00:00:00.001 Started by upstream project "autotest-per-patch" build number 126244 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.032 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.050 Fetching changes from the remote Git repository 00:00:00.052 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.074 Using shallow fetch with depth 1 00:00:00.074 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.074 > git --version # timeout=10 00:00:00.094 > git --version # 'git version 2.39.2' 00:00:00.094 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.108 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.108 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.284 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.293 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.303 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:02.303 > git config core.sparsecheckout # timeout=10 00:00:02.312 > git read-tree -mu HEAD # timeout=10 00:00:02.327 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:02.345 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:02.346 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:02.558 [Pipeline] Start of Pipeline 00:00:02.577 [Pipeline] library 00:00:02.579 Loading library shm_lib@master 00:00:02.579 Library shm_lib@master is cached. Copying from home. 00:00:02.595 [Pipeline] node 00:00:02.602 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:02.603 [Pipeline] { 00:00:02.613 [Pipeline] catchError 00:00:02.614 [Pipeline] { 00:00:02.626 [Pipeline] wrap 00:00:02.634 [Pipeline] { 00:00:02.640 [Pipeline] stage 00:00:02.642 [Pipeline] { (Prologue) 00:00:02.658 [Pipeline] echo 00:00:02.659 Node: VM-host-SM9 00:00:02.664 [Pipeline] cleanWs 00:00:02.671 [WS-CLEANUP] Deleting project workspace... 00:00:02.671 [WS-CLEANUP] Deferred wipeout is used... 
00:00:02.675 [WS-CLEANUP] done 00:00:02.879 [Pipeline] setCustomBuildProperty 00:00:02.945 [Pipeline] httpRequest 00:00:02.963 [Pipeline] echo 00:00:02.964 Sorcerer 10.211.164.101 is alive 00:00:02.970 [Pipeline] httpRequest 00:00:02.974 HttpMethod: GET 00:00:02.974 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:02.974 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:02.976 Response Code: HTTP/1.1 200 OK 00:00:02.976 Success: Status code 200 is in the accepted range: 200,404 00:00:02.976 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.119 [Pipeline] sh 00:00:03.392 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.403 [Pipeline] httpRequest 00:00:03.415 [Pipeline] echo 00:00:03.416 Sorcerer 10.211.164.101 is alive 00:00:03.423 [Pipeline] httpRequest 00:00:03.426 HttpMethod: GET 00:00:03.426 URL: http://10.211.164.101/packages/spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz 00:00:03.427 Sending request to url: http://10.211.164.101/packages/spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz 00:00:03.427 Response Code: HTTP/1.1 200 OK 00:00:03.428 Success: Status code 200 is in the accepted range: 200,404 00:00:03.428 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz 00:00:19.422 [Pipeline] sh 00:00:19.698 + tar --no-same-owner -xf spdk_f8598a71feda976fd71f88dd27285aed90c31ff9.tar.gz 00:00:22.992 [Pipeline] sh 00:00:23.270 + git -C spdk log --oneline -n5 00:00:23.270 f8598a71f bdev/uring: use util functions in bdev_uring_check_zoned_support 00:00:23.270 4903ec649 ublk: use spdk_read_sysfs_attribute_uint32 to get max ublks 00:00:23.270 94c9ab717 util: add spdk_read_sysfs_attribute_uint32 00:00:23.270 a940d3681 util: add spdk_read_sysfs_attribute 00:00:23.270 f604975ba doc: fix deprecation.md typo 00:00:23.296 [Pipeline] writeFile 00:00:23.318 [Pipeline] sh 00:00:23.597 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:23.611 [Pipeline] sh 00:00:23.889 + cat autorun-spdk.conf 00:00:23.889 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:23.889 SPDK_TEST_NVMF=1 00:00:23.889 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:23.889 SPDK_TEST_USDT=1 00:00:23.889 SPDK_TEST_NVMF_MDNS=1 00:00:23.889 SPDK_RUN_UBSAN=1 00:00:23.889 NET_TYPE=virt 00:00:23.889 SPDK_JSONRPC_GO_CLIENT=1 00:00:23.889 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:23.896 RUN_NIGHTLY=0 00:00:23.898 [Pipeline] } 00:00:23.916 [Pipeline] // stage 00:00:23.933 [Pipeline] stage 00:00:23.935 [Pipeline] { (Run VM) 00:00:23.951 [Pipeline] sh 00:00:24.230 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:24.230 + echo 'Start stage prepare_nvme.sh' 00:00:24.230 Start stage prepare_nvme.sh 00:00:24.230 + [[ -n 0 ]] 00:00:24.230 + disk_prefix=ex0 00:00:24.230 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:00:24.230 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:00:24.230 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:00:24.230 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.230 ++ SPDK_TEST_NVMF=1 00:00:24.230 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:24.230 ++ SPDK_TEST_USDT=1 00:00:24.230 ++ SPDK_TEST_NVMF_MDNS=1 00:00:24.230 ++ SPDK_RUN_UBSAN=1 00:00:24.230 ++ NET_TYPE=virt 00:00:24.230 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:24.230 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 
00:00:24.230 ++ RUN_NIGHTLY=0 00:00:24.230 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:24.230 + nvme_files=() 00:00:24.230 + declare -A nvme_files 00:00:24.230 + backend_dir=/var/lib/libvirt/images/backends 00:00:24.230 + nvme_files['nvme.img']=5G 00:00:24.230 + nvme_files['nvme-cmb.img']=5G 00:00:24.230 + nvme_files['nvme-multi0.img']=4G 00:00:24.230 + nvme_files['nvme-multi1.img']=4G 00:00:24.230 + nvme_files['nvme-multi2.img']=4G 00:00:24.230 + nvme_files['nvme-openstack.img']=8G 00:00:24.230 + nvme_files['nvme-zns.img']=5G 00:00:24.230 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:24.230 + (( SPDK_TEST_FTL == 1 )) 00:00:24.230 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:24.230 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:24.230 + for nvme in "${!nvme_files[@]}" 00:00:24.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:24.231 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.231 + for nvme in "${!nvme_files[@]}" 00:00:24.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:24.491 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.491 + for nvme in "${!nvme_files[@]}" 00:00:24.491 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:24.491 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:24.491 + for nvme in "${!nvme_files[@]}" 00:00:24.491 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:24.491 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.491 + for nvme in "${!nvme_files[@]}" 00:00:24.491 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:24.491 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.491 + for nvme in "${!nvme_files[@]}" 00:00:24.491 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:24.491 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.491 + for nvme in "${!nvme_files[@]}" 00:00:24.491 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:24.753 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.753 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:24.753 + echo 'End stage prepare_nvme.sh' 00:00:24.753 End stage prepare_nvme.sh 00:00:24.764 [Pipeline] sh 00:00:25.043 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:25.043 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:00:25.043 00:00:25.043 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:00:25.043 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:00:25.043 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:25.043 HELP=0 00:00:25.043 DRY_RUN=0 00:00:25.043 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:00:25.043 NVME_DISKS_TYPE=nvme,nvme, 00:00:25.043 NVME_AUTO_CREATE=0 00:00:25.043 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:00:25.043 NVME_CMB=,, 00:00:25.043 NVME_PMR=,, 00:00:25.043 NVME_ZNS=,, 00:00:25.043 NVME_MS=,, 00:00:25.043 NVME_FDP=,, 00:00:25.043 SPDK_VAGRANT_DISTRO=fedora38 00:00:25.043 SPDK_VAGRANT_VMCPU=10 00:00:25.043 SPDK_VAGRANT_VMRAM=12288 00:00:25.043 SPDK_VAGRANT_PROVIDER=libvirt 00:00:25.043 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:25.043 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:25.043 SPDK_OPENSTACK_NETWORK=0 00:00:25.043 VAGRANT_PACKAGE_BOX=0 00:00:25.043 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:25.043 FORCE_DISTRO=true 00:00:25.043 VAGRANT_BOX_VERSION= 00:00:25.043 EXTRA_VAGRANTFILES= 00:00:25.043 NIC_MODEL=e1000 00:00:25.043 00:00:25.043 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:00:25.043 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:29.224 Bringing machine 'default' up with 'libvirt' provider... 00:00:29.224 ==> default: Creating image (snapshot of base box volume). 00:00:29.481 ==> default: Creating domain with the following settings... 00:00:29.481 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721074610_97260e8f0fb40f120a64 00:00:29.481 ==> default: -- Domain type: kvm 00:00:29.481 ==> default: -- Cpus: 10 00:00:29.481 ==> default: -- Feature: acpi 00:00:29.481 ==> default: -- Feature: apic 00:00:29.481 ==> default: -- Feature: pae 00:00:29.481 ==> default: -- Memory: 12288M 00:00:29.481 ==> default: -- Memory Backing: hugepages: 00:00:29.481 ==> default: -- Management MAC: 00:00:29.481 ==> default: -- Loader: 00:00:29.481 ==> default: -- Nvram: 00:00:29.481 ==> default: -- Base box: spdk/fedora38 00:00:29.481 ==> default: -- Storage pool: default 00:00:29.481 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721074610_97260e8f0fb40f120a64.img (20G) 00:00:29.481 ==> default: -- Volume Cache: default 00:00:29.481 ==> default: -- Kernel: 00:00:29.481 ==> default: -- Initrd: 00:00:29.481 ==> default: -- Graphics Type: vnc 00:00:29.481 ==> default: -- Graphics Port: -1 00:00:29.481 ==> default: -- Graphics IP: 127.0.0.1 00:00:29.481 ==> default: -- Graphics Password: Not defined 00:00:29.481 ==> default: -- Video Type: cirrus 00:00:29.481 ==> default: -- Video VRAM: 9216 00:00:29.481 ==> default: -- Sound Type: 00:00:29.481 ==> default: -- Keymap: en-us 00:00:29.481 ==> default: -- TPM Path: 00:00:29.481 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:29.481 ==> default: -- Command line args: 00:00:29.481 ==> default: -> value=-device, 00:00:29.481 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:29.481 ==> default: -> value=-drive, 00:00:29.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:00:29.481 ==> 
default: -> value=-device, 00:00:29.481 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.481 ==> default: -> value=-device, 00:00:29.481 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:29.481 ==> default: -> value=-drive, 00:00:29.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:29.481 ==> default: -> value=-device, 00:00:29.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.481 ==> default: -> value=-drive, 00:00:29.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:29.481 ==> default: -> value=-device, 00:00:29.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.481 ==> default: -> value=-drive, 00:00:29.481 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:29.481 ==> default: -> value=-device, 00:00:29.481 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.481 ==> default: Creating shared folders metadata... 00:00:29.481 ==> default: Starting domain. 00:00:30.855 ==> default: Waiting for domain to get an IP address... 00:00:48.996 ==> default: Waiting for SSH to become available... 00:00:48.996 ==> default: Configuring and enabling network interfaces... 00:00:52.273 default: SSH address: 192.168.121.222:22 00:00:52.273 default: SSH username: vagrant 00:00:52.273 default: SSH auth method: private key 00:00:54.808 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:02.914 ==> default: Mounting SSHFS shared folder... 00:01:03.171 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:03.171 ==> default: Checking Mount.. 00:01:04.542 ==> default: Folder Successfully Mounted! 00:01:04.542 ==> default: Running provisioner: file... 00:01:05.109 default: ~/.gitconfig => .gitconfig 00:01:05.367 00:01:05.367 SUCCESS! 00:01:05.367 00:01:05.367 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:05.367 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:05.367 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:01:05.367 00:01:05.376 [Pipeline] } 00:01:05.395 [Pipeline] // stage 00:01:05.405 [Pipeline] dir 00:01:05.405 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:01:05.407 [Pipeline] { 00:01:05.424 [Pipeline] catchError 00:01:05.426 [Pipeline] { 00:01:05.442 [Pipeline] sh 00:01:05.719 + vagrant ssh-config --host vagrant 00:01:05.719 + sed -ne /^Host/,$p 00:01:05.719 + tee ssh_conf 00:01:09.899 Host vagrant 00:01:09.899 HostName 192.168.121.222 00:01:09.899 User vagrant 00:01:09.899 Port 22 00:01:09.899 UserKnownHostsFile /dev/null 00:01:09.899 StrictHostKeyChecking no 00:01:09.899 PasswordAuthentication no 00:01:09.899 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:09.899 IdentitiesOnly yes 00:01:09.899 LogLevel FATAL 00:01:09.899 ForwardAgent yes 00:01:09.899 ForwardX11 yes 00:01:09.899 00:01:09.916 [Pipeline] withEnv 00:01:09.918 [Pipeline] { 00:01:09.933 [Pipeline] sh 00:01:10.210 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:10.210 source /etc/os-release 00:01:10.210 [[ -e /image.version ]] && img=$(< /image.version) 00:01:10.210 # Minimal, systemd-like check. 00:01:10.210 if [[ -e /.dockerenv ]]; then 00:01:10.210 # Clear garbage from the node's name: 00:01:10.210 # agt-er_autotest_547-896 -> autotest_547-896 00:01:10.210 # $HOSTNAME is the actual container id 00:01:10.210 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:10.210 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:10.210 # We can assume this is a mount from a host where container is running, 00:01:10.210 # so fetch its hostname to easily identify the target swarm worker. 00:01:10.210 container="$(< /etc/hostname) ($agent)" 00:01:10.210 else 00:01:10.210 # Fallback 00:01:10.210 container=$agent 00:01:10.210 fi 00:01:10.210 fi 00:01:10.210 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:10.210 00:01:10.219 [Pipeline] } 00:01:10.234 [Pipeline] // withEnv 00:01:10.242 [Pipeline] setCustomBuildProperty 00:01:10.256 [Pipeline] stage 00:01:10.258 [Pipeline] { (Tests) 00:01:10.276 [Pipeline] sh 00:01:10.551 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:10.819 [Pipeline] sh 00:01:11.091 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:11.106 [Pipeline] timeout 00:01:11.106 Timeout set to expire in 40 min 00:01:11.108 [Pipeline] { 00:01:11.125 [Pipeline] sh 00:01:11.400 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:11.988 HEAD is now at f8598a71f bdev/uring: use util functions in bdev_uring_check_zoned_support 00:01:11.995 [Pipeline] sh 00:01:12.273 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:12.545 [Pipeline] sh 00:01:12.824 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:13.098 [Pipeline] sh 00:01:13.376 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:13.376 ++ readlink -f spdk_repo 00:01:13.376 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:13.376 + [[ -n /home/vagrant/spdk_repo ]] 00:01:13.376 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:13.376 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:13.376 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:01:13.376 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:13.376 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:13.376 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:13.376 + cd /home/vagrant/spdk_repo 00:01:13.376 + source /etc/os-release 00:01:13.376 ++ NAME='Fedora Linux' 00:01:13.376 ++ VERSION='38 (Cloud Edition)' 00:01:13.376 ++ ID=fedora 00:01:13.376 ++ VERSION_ID=38 00:01:13.376 ++ VERSION_CODENAME= 00:01:13.376 ++ PLATFORM_ID=platform:f38 00:01:13.376 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:13.376 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:13.376 ++ LOGO=fedora-logo-icon 00:01:13.376 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:13.376 ++ HOME_URL=https://fedoraproject.org/ 00:01:13.376 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:13.376 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:13.376 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:13.376 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:13.376 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:13.376 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:13.376 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:13.376 ++ SUPPORT_END=2024-05-14 00:01:13.376 ++ VARIANT='Cloud Edition' 00:01:13.376 ++ VARIANT_ID=cloud 00:01:13.376 + uname -a 00:01:13.376 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:13.376 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:13.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:13.942 Hugepages 00:01:13.942 node hugesize free / total 00:01:13.942 node0 1048576kB 0 / 0 00:01:13.942 node0 2048kB 0 / 0 00:01:13.942 00:01:13.942 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.942 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:13.942 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:13.942 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:13.942 + rm -f /tmp/spdk-ld-path 00:01:13.942 + source autorun-spdk.conf 00:01:13.942 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:13.942 ++ SPDK_TEST_NVMF=1 00:01:13.942 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:13.942 ++ SPDK_TEST_USDT=1 00:01:13.942 ++ SPDK_TEST_NVMF_MDNS=1 00:01:13.942 ++ SPDK_RUN_UBSAN=1 00:01:13.942 ++ NET_TYPE=virt 00:01:13.942 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:13.942 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:13.942 ++ RUN_NIGHTLY=0 00:01:13.942 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:13.942 + [[ -n '' ]] 00:01:13.942 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:14.202 + for M in /var/spdk/build-*-manifest.txt 00:01:14.202 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.202 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.202 + for M in /var/spdk/build-*-manifest.txt 00:01:14.202 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.202 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:14.202 ++ uname 00:01:14.202 + [[ Linux == \L\i\n\u\x ]] 00:01:14.202 + sudo dmesg -T 00:01:14.202 + sudo dmesg --clear 00:01:14.202 + dmesg_pid=5154 00:01:14.202 + sudo dmesg -Tw 00:01:14.202 + [[ Fedora Linux == FreeBSD ]] 00:01:14.202 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.202 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.202 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
00:01:14.202 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.202 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.202 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.202 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.202 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:14.202 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.202 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.202 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.202 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.202 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.202 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.202 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:14.202 Test configuration: 00:01:14.202 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.202 SPDK_TEST_NVMF=1 00:01:14.202 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.202 SPDK_TEST_USDT=1 00:01:14.202 SPDK_TEST_NVMF_MDNS=1 00:01:14.202 SPDK_RUN_UBSAN=1 00:01:14.202 NET_TYPE=virt 00:01:14.202 SPDK_JSONRPC_GO_CLIENT=1 00:01:14.202 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:14.202 RUN_NIGHTLY=0 20:17:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:14.202 20:17:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.202 20:17:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.202 20:17:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.202 20:17:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.202 20:17:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.202 20:17:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.202 20:17:35 -- paths/export.sh@5 -- $ export PATH 00:01:14.202 20:17:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.202 20:17:35 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:14.202 20:17:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:14.202 20:17:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721074655.XXXXXX 00:01:14.202 
20:17:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721074655.IFPd9L 00:01:14.202 20:17:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:14.202 20:17:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:14.202 20:17:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:14.202 20:17:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:14.202 20:17:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.202 20:17:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:14.202 20:17:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:14.202 20:17:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.202 20:17:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:01:14.202 20:17:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:14.202 20:17:35 -- pm/common@17 -- $ local monitor 00:01:14.202 20:17:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.202 20:17:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.202 20:17:35 -- pm/common@25 -- $ sleep 1 00:01:14.202 20:17:35 -- pm/common@21 -- $ date +%s 00:01:14.202 20:17:35 -- pm/common@21 -- $ date +%s 00:01:14.202 20:17:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721074655 00:01:14.202 20:17:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721074655 00:01:14.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721074655_collect-vmstat.pm.log 00:01:14.202 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721074655_collect-cpu-load.pm.log 00:01:15.583 20:17:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:15.583 20:17:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.583 20:17:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.583 20:17:36 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:15.583 20:17:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.583 Mon Jul 15 08:17:36 PM UTC 2024 00:01:15.583 20:17:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.583 v24.09-pre-214-gf8598a71f 00:01:15.583 20:17:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.583 20:17:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.583 20:17:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.583 20:17:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:15.583 20:17:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:15.583 20:17:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.583 ************************************ 00:01:15.583 START TEST ubsan 00:01:15.583 ************************************ 00:01:15.583 using ubsan 00:01:15.583 20:17:36 ubsan -- common/autotest_common.sh@1123 -- 
$ echo 'using ubsan' 00:01:15.583 00:01:15.583 real 0m0.000s 00:01:15.583 user 0m0.000s 00:01:15.583 sys 0m0.000s 00:01:15.583 20:17:36 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:15.583 20:17:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:15.583 ************************************ 00:01:15.583 END TEST ubsan 00:01:15.583 ************************************ 00:01:15.583 20:17:36 -- common/autotest_common.sh@1142 -- $ return 0 00:01:15.583 20:17:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.583 20:17:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.583 20:17:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.583 20:17:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.583 20:17:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.583 20:17:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.584 20:17:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.584 20:17:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.584 20:17:36 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:15.584 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:15.584 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:16.520 Using 'verbs' RDMA provider 00:01:29.651 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:44.535 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:44.535 go version go1.21.1 linux/amd64 00:01:44.535 Creating mk/config.mk...done. 00:01:44.535 Creating mk/cc.flags.mk...done. 00:01:44.535 Type 'make' to build. 00:01:44.535 20:18:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:44.535 20:18:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:44.535 20:18:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:44.535 20:18:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.535 ************************************ 00:01:44.535 START TEST make 00:01:44.535 ************************************ 00:01:44.535 20:18:04 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:44.535 make[1]: Nothing to be done for 'all'. 
00:01:59.446 The Meson build system 00:01:59.446 Version: 1.3.1 00:01:59.446 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:59.446 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:59.446 Build type: native build 00:01:59.446 Program cat found: YES (/usr/bin/cat) 00:01:59.446 Project name: DPDK 00:01:59.446 Project version: 24.03.0 00:01:59.446 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:59.446 C linker for the host machine: cc ld.bfd 2.39-16 00:01:59.446 Host machine cpu family: x86_64 00:01:59.446 Host machine cpu: x86_64 00:01:59.446 Message: ## Building in Developer Mode ## 00:01:59.446 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:59.446 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:59.446 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:59.446 Program python3 found: YES (/usr/bin/python3) 00:01:59.446 Program cat found: YES (/usr/bin/cat) 00:01:59.446 Compiler for C supports arguments -march=native: YES 00:01:59.446 Checking for size of "void *" : 8 00:01:59.446 Checking for size of "void *" : 8 (cached) 00:01:59.446 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:59.446 Library m found: YES 00:01:59.446 Library numa found: YES 00:01:59.446 Has header "numaif.h" : YES 00:01:59.446 Library fdt found: NO 00:01:59.446 Library execinfo found: NO 00:01:59.446 Has header "execinfo.h" : YES 00:01:59.446 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:59.446 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:59.446 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:59.446 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:59.446 Run-time dependency openssl found: YES 3.0.9 00:01:59.446 Run-time dependency libpcap found: YES 1.10.4 00:01:59.446 Has header "pcap.h" with dependency libpcap: YES 00:01:59.446 Compiler for C supports arguments -Wcast-qual: YES 00:01:59.446 Compiler for C supports arguments -Wdeprecated: YES 00:01:59.446 Compiler for C supports arguments -Wformat: YES 00:01:59.446 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:59.446 Compiler for C supports arguments -Wformat-security: NO 00:01:59.446 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:59.446 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:59.446 Compiler for C supports arguments -Wnested-externs: YES 00:01:59.446 Compiler for C supports arguments -Wold-style-definition: YES 00:01:59.446 Compiler for C supports arguments -Wpointer-arith: YES 00:01:59.446 Compiler for C supports arguments -Wsign-compare: YES 00:01:59.446 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:59.446 Compiler for C supports arguments -Wundef: YES 00:01:59.446 Compiler for C supports arguments -Wwrite-strings: YES 00:01:59.446 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:59.446 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:59.446 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:59.446 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:59.446 Program objdump found: YES (/usr/bin/objdump) 00:01:59.446 Compiler for C supports arguments -mavx512f: YES 00:01:59.446 Checking if "AVX512 checking" compiles: YES 00:01:59.446 Fetching value of define "__SSE4_2__" : 1 00:01:59.446 Fetching value of define 
"__AES__" : 1 00:01:59.446 Fetching value of define "__AVX__" : 1 00:01:59.446 Fetching value of define "__AVX2__" : 1 00:01:59.446 Fetching value of define "__AVX512BW__" : (undefined) 00:01:59.446 Fetching value of define "__AVX512CD__" : (undefined) 00:01:59.446 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:59.446 Fetching value of define "__AVX512F__" : (undefined) 00:01:59.446 Fetching value of define "__AVX512VL__" : (undefined) 00:01:59.446 Fetching value of define "__PCLMUL__" : 1 00:01:59.446 Fetching value of define "__RDRND__" : 1 00:01:59.446 Fetching value of define "__RDSEED__" : 1 00:01:59.446 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:59.446 Fetching value of define "__znver1__" : (undefined) 00:01:59.446 Fetching value of define "__znver2__" : (undefined) 00:01:59.446 Fetching value of define "__znver3__" : (undefined) 00:01:59.446 Fetching value of define "__znver4__" : (undefined) 00:01:59.446 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:59.446 Message: lib/log: Defining dependency "log" 00:01:59.446 Message: lib/kvargs: Defining dependency "kvargs" 00:01:59.446 Message: lib/telemetry: Defining dependency "telemetry" 00:01:59.446 Checking for function "getentropy" : NO 00:01:59.446 Message: lib/eal: Defining dependency "eal" 00:01:59.446 Message: lib/ring: Defining dependency "ring" 00:01:59.446 Message: lib/rcu: Defining dependency "rcu" 00:01:59.446 Message: lib/mempool: Defining dependency "mempool" 00:01:59.446 Message: lib/mbuf: Defining dependency "mbuf" 00:01:59.446 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:59.446 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:59.446 Compiler for C supports arguments -mpclmul: YES 00:01:59.446 Compiler for C supports arguments -maes: YES 00:01:59.446 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:59.446 Compiler for C supports arguments -mavx512bw: YES 00:01:59.446 Compiler for C supports arguments -mavx512dq: YES 00:01:59.446 Compiler for C supports arguments -mavx512vl: YES 00:01:59.446 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:59.446 Compiler for C supports arguments -mavx2: YES 00:01:59.446 Compiler for C supports arguments -mavx: YES 00:01:59.446 Message: lib/net: Defining dependency "net" 00:01:59.446 Message: lib/meter: Defining dependency "meter" 00:01:59.446 Message: lib/ethdev: Defining dependency "ethdev" 00:01:59.446 Message: lib/pci: Defining dependency "pci" 00:01:59.446 Message: lib/cmdline: Defining dependency "cmdline" 00:01:59.446 Message: lib/hash: Defining dependency "hash" 00:01:59.446 Message: lib/timer: Defining dependency "timer" 00:01:59.446 Message: lib/compressdev: Defining dependency "compressdev" 00:01:59.446 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:59.446 Message: lib/dmadev: Defining dependency "dmadev" 00:01:59.446 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:59.446 Message: lib/power: Defining dependency "power" 00:01:59.446 Message: lib/reorder: Defining dependency "reorder" 00:01:59.446 Message: lib/security: Defining dependency "security" 00:01:59.446 Has header "linux/userfaultfd.h" : YES 00:01:59.446 Has header "linux/vduse.h" : YES 00:01:59.446 Message: lib/vhost: Defining dependency "vhost" 00:01:59.446 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:59.446 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:59.446 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:59.446 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:59.446 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:59.446 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:59.446 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:59.446 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:59.446 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:59.446 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:59.446 Program doxygen found: YES (/usr/bin/doxygen) 00:01:59.446 Configuring doxy-api-html.conf using configuration 00:01:59.446 Configuring doxy-api-man.conf using configuration 00:01:59.446 Program mandb found: YES (/usr/bin/mandb) 00:01:59.446 Program sphinx-build found: NO 00:01:59.446 Configuring rte_build_config.h using configuration 00:01:59.446 Message: 00:01:59.446 ================= 00:01:59.446 Applications Enabled 00:01:59.446 ================= 00:01:59.446 00:01:59.446 apps: 00:01:59.446 00:01:59.446 00:01:59.446 Message: 00:01:59.446 ================= 00:01:59.446 Libraries Enabled 00:01:59.446 ================= 00:01:59.446 00:01:59.446 libs: 00:01:59.446 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:59.446 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:59.446 cryptodev, dmadev, power, reorder, security, vhost, 00:01:59.446 00:01:59.446 Message: 00:01:59.446 =============== 00:01:59.446 Drivers Enabled 00:01:59.446 =============== 00:01:59.446 00:01:59.446 common: 00:01:59.446 00:01:59.446 bus: 00:01:59.446 pci, vdev, 00:01:59.446 mempool: 00:01:59.446 ring, 00:01:59.446 dma: 00:01:59.446 00:01:59.446 net: 00:01:59.446 00:01:59.446 crypto: 00:01:59.446 00:01:59.446 compress: 00:01:59.446 00:01:59.446 vdpa: 00:01:59.446 00:01:59.446 00:01:59.446 Message: 00:01:59.446 ================= 00:01:59.446 Content Skipped 00:01:59.446 ================= 00:01:59.446 00:01:59.446 apps: 00:01:59.446 dumpcap: explicitly disabled via build config 00:01:59.446 graph: explicitly disabled via build config 00:01:59.446 pdump: explicitly disabled via build config 00:01:59.446 proc-info: explicitly disabled via build config 00:01:59.446 test-acl: explicitly disabled via build config 00:01:59.446 test-bbdev: explicitly disabled via build config 00:01:59.446 test-cmdline: explicitly disabled via build config 00:01:59.446 test-compress-perf: explicitly disabled via build config 00:01:59.446 test-crypto-perf: explicitly disabled via build config 00:01:59.446 test-dma-perf: explicitly disabled via build config 00:01:59.447 test-eventdev: explicitly disabled via build config 00:01:59.447 test-fib: explicitly disabled via build config 00:01:59.447 test-flow-perf: explicitly disabled via build config 00:01:59.447 test-gpudev: explicitly disabled via build config 00:01:59.447 test-mldev: explicitly disabled via build config 00:01:59.447 test-pipeline: explicitly disabled via build config 00:01:59.447 test-pmd: explicitly disabled via build config 00:01:59.447 test-regex: explicitly disabled via build config 00:01:59.447 test-sad: explicitly disabled via build config 00:01:59.447 test-security-perf: explicitly disabled via build config 00:01:59.447 00:01:59.447 libs: 00:01:59.447 argparse: explicitly disabled via build config 00:01:59.447 metrics: explicitly disabled via build config 00:01:59.447 acl: explicitly disabled via build config 00:01:59.447 bbdev: explicitly disabled via build config 00:01:59.447 
bitratestats: explicitly disabled via build config 00:01:59.447 bpf: explicitly disabled via build config 00:01:59.447 cfgfile: explicitly disabled via build config 00:01:59.447 distributor: explicitly disabled via build config 00:01:59.447 efd: explicitly disabled via build config 00:01:59.447 eventdev: explicitly disabled via build config 00:01:59.447 dispatcher: explicitly disabled via build config 00:01:59.447 gpudev: explicitly disabled via build config 00:01:59.447 gro: explicitly disabled via build config 00:01:59.447 gso: explicitly disabled via build config 00:01:59.447 ip_frag: explicitly disabled via build config 00:01:59.447 jobstats: explicitly disabled via build config 00:01:59.447 latencystats: explicitly disabled via build config 00:01:59.447 lpm: explicitly disabled via build config 00:01:59.447 member: explicitly disabled via build config 00:01:59.447 pcapng: explicitly disabled via build config 00:01:59.447 rawdev: explicitly disabled via build config 00:01:59.447 regexdev: explicitly disabled via build config 00:01:59.447 mldev: explicitly disabled via build config 00:01:59.447 rib: explicitly disabled via build config 00:01:59.447 sched: explicitly disabled via build config 00:01:59.447 stack: explicitly disabled via build config 00:01:59.447 ipsec: explicitly disabled via build config 00:01:59.447 pdcp: explicitly disabled via build config 00:01:59.447 fib: explicitly disabled via build config 00:01:59.447 port: explicitly disabled via build config 00:01:59.447 pdump: explicitly disabled via build config 00:01:59.447 table: explicitly disabled via build config 00:01:59.447 pipeline: explicitly disabled via build config 00:01:59.447 graph: explicitly disabled via build config 00:01:59.447 node: explicitly disabled via build config 00:01:59.447 00:01:59.447 drivers: 00:01:59.447 common/cpt: not in enabled drivers build config 00:01:59.447 common/dpaax: not in enabled drivers build config 00:01:59.447 common/iavf: not in enabled drivers build config 00:01:59.447 common/idpf: not in enabled drivers build config 00:01:59.447 common/ionic: not in enabled drivers build config 00:01:59.447 common/mvep: not in enabled drivers build config 00:01:59.447 common/octeontx: not in enabled drivers build config 00:01:59.447 bus/auxiliary: not in enabled drivers build config 00:01:59.447 bus/cdx: not in enabled drivers build config 00:01:59.447 bus/dpaa: not in enabled drivers build config 00:01:59.447 bus/fslmc: not in enabled drivers build config 00:01:59.447 bus/ifpga: not in enabled drivers build config 00:01:59.447 bus/platform: not in enabled drivers build config 00:01:59.447 bus/uacce: not in enabled drivers build config 00:01:59.447 bus/vmbus: not in enabled drivers build config 00:01:59.447 common/cnxk: not in enabled drivers build config 00:01:59.447 common/mlx5: not in enabled drivers build config 00:01:59.447 common/nfp: not in enabled drivers build config 00:01:59.447 common/nitrox: not in enabled drivers build config 00:01:59.447 common/qat: not in enabled drivers build config 00:01:59.447 common/sfc_efx: not in enabled drivers build config 00:01:59.447 mempool/bucket: not in enabled drivers build config 00:01:59.447 mempool/cnxk: not in enabled drivers build config 00:01:59.447 mempool/dpaa: not in enabled drivers build config 00:01:59.447 mempool/dpaa2: not in enabled drivers build config 00:01:59.447 mempool/octeontx: not in enabled drivers build config 00:01:59.447 mempool/stack: not in enabled drivers build config 00:01:59.447 dma/cnxk: not in enabled drivers build 
config 00:01:59.447 dma/dpaa: not in enabled drivers build config 00:01:59.447 dma/dpaa2: not in enabled drivers build config 00:01:59.447 dma/hisilicon: not in enabled drivers build config 00:01:59.447 dma/idxd: not in enabled drivers build config 00:01:59.447 dma/ioat: not in enabled drivers build config 00:01:59.447 dma/skeleton: not in enabled drivers build config 00:01:59.447 net/af_packet: not in enabled drivers build config 00:01:59.447 net/af_xdp: not in enabled drivers build config 00:01:59.447 net/ark: not in enabled drivers build config 00:01:59.447 net/atlantic: not in enabled drivers build config 00:01:59.447 net/avp: not in enabled drivers build config 00:01:59.447 net/axgbe: not in enabled drivers build config 00:01:59.447 net/bnx2x: not in enabled drivers build config 00:01:59.447 net/bnxt: not in enabled drivers build config 00:01:59.447 net/bonding: not in enabled drivers build config 00:01:59.447 net/cnxk: not in enabled drivers build config 00:01:59.447 net/cpfl: not in enabled drivers build config 00:01:59.447 net/cxgbe: not in enabled drivers build config 00:01:59.447 net/dpaa: not in enabled drivers build config 00:01:59.447 net/dpaa2: not in enabled drivers build config 00:01:59.447 net/e1000: not in enabled drivers build config 00:01:59.447 net/ena: not in enabled drivers build config 00:01:59.447 net/enetc: not in enabled drivers build config 00:01:59.447 net/enetfec: not in enabled drivers build config 00:01:59.447 net/enic: not in enabled drivers build config 00:01:59.447 net/failsafe: not in enabled drivers build config 00:01:59.447 net/fm10k: not in enabled drivers build config 00:01:59.447 net/gve: not in enabled drivers build config 00:01:59.447 net/hinic: not in enabled drivers build config 00:01:59.447 net/hns3: not in enabled drivers build config 00:01:59.447 net/i40e: not in enabled drivers build config 00:01:59.447 net/iavf: not in enabled drivers build config 00:01:59.447 net/ice: not in enabled drivers build config 00:01:59.447 net/idpf: not in enabled drivers build config 00:01:59.447 net/igc: not in enabled drivers build config 00:01:59.447 net/ionic: not in enabled drivers build config 00:01:59.447 net/ipn3ke: not in enabled drivers build config 00:01:59.447 net/ixgbe: not in enabled drivers build config 00:01:59.447 net/mana: not in enabled drivers build config 00:01:59.447 net/memif: not in enabled drivers build config 00:01:59.447 net/mlx4: not in enabled drivers build config 00:01:59.447 net/mlx5: not in enabled drivers build config 00:01:59.447 net/mvneta: not in enabled drivers build config 00:01:59.447 net/mvpp2: not in enabled drivers build config 00:01:59.447 net/netvsc: not in enabled drivers build config 00:01:59.447 net/nfb: not in enabled drivers build config 00:01:59.447 net/nfp: not in enabled drivers build config 00:01:59.447 net/ngbe: not in enabled drivers build config 00:01:59.447 net/null: not in enabled drivers build config 00:01:59.447 net/octeontx: not in enabled drivers build config 00:01:59.447 net/octeon_ep: not in enabled drivers build config 00:01:59.447 net/pcap: not in enabled drivers build config 00:01:59.447 net/pfe: not in enabled drivers build config 00:01:59.447 net/qede: not in enabled drivers build config 00:01:59.447 net/ring: not in enabled drivers build config 00:01:59.447 net/sfc: not in enabled drivers build config 00:01:59.447 net/softnic: not in enabled drivers build config 00:01:59.447 net/tap: not in enabled drivers build config 00:01:59.447 net/thunderx: not in enabled drivers build config 00:01:59.447 
net/txgbe: not in enabled drivers build config 00:01:59.447 net/vdev_netvsc: not in enabled drivers build config 00:01:59.447 net/vhost: not in enabled drivers build config 00:01:59.447 net/virtio: not in enabled drivers build config 00:01:59.447 net/vmxnet3: not in enabled drivers build config 00:01:59.447 raw/*: missing internal dependency, "rawdev" 00:01:59.447 crypto/armv8: not in enabled drivers build config 00:01:59.447 crypto/bcmfs: not in enabled drivers build config 00:01:59.447 crypto/caam_jr: not in enabled drivers build config 00:01:59.447 crypto/ccp: not in enabled drivers build config 00:01:59.447 crypto/cnxk: not in enabled drivers build config 00:01:59.447 crypto/dpaa_sec: not in enabled drivers build config 00:01:59.447 crypto/dpaa2_sec: not in enabled drivers build config 00:01:59.447 crypto/ipsec_mb: not in enabled drivers build config 00:01:59.447 crypto/mlx5: not in enabled drivers build config 00:01:59.447 crypto/mvsam: not in enabled drivers build config 00:01:59.447 crypto/nitrox: not in enabled drivers build config 00:01:59.447 crypto/null: not in enabled drivers build config 00:01:59.447 crypto/octeontx: not in enabled drivers build config 00:01:59.447 crypto/openssl: not in enabled drivers build config 00:01:59.447 crypto/scheduler: not in enabled drivers build config 00:01:59.447 crypto/uadk: not in enabled drivers build config 00:01:59.447 crypto/virtio: not in enabled drivers build config 00:01:59.447 compress/isal: not in enabled drivers build config 00:01:59.447 compress/mlx5: not in enabled drivers build config 00:01:59.447 compress/nitrox: not in enabled drivers build config 00:01:59.447 compress/octeontx: not in enabled drivers build config 00:01:59.447 compress/zlib: not in enabled drivers build config 00:01:59.447 regex/*: missing internal dependency, "regexdev" 00:01:59.447 ml/*: missing internal dependency, "mldev" 00:01:59.447 vdpa/ifc: not in enabled drivers build config 00:01:59.447 vdpa/mlx5: not in enabled drivers build config 00:01:59.447 vdpa/nfp: not in enabled drivers build config 00:01:59.447 vdpa/sfc: not in enabled drivers build config 00:01:59.447 event/*: missing internal dependency, "eventdev" 00:01:59.447 baseband/*: missing internal dependency, "bbdev" 00:01:59.447 gpu/*: missing internal dependency, "gpudev" 00:01:59.447 00:01:59.447 00:01:59.447 Build targets in project: 85 00:01:59.447 00:01:59.447 DPDK 24.03.0 00:01:59.447 00:01:59.447 User defined options 00:01:59.447 buildtype : debug 00:01:59.447 default_library : shared 00:01:59.447 libdir : lib 00:01:59.447 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.447 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:59.447 c_link_args : 00:01:59.447 cpu_instruction_set: native 00:01:59.447 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:59.447 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:59.447 enable_docs : false 00:01:59.448 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:59.448 enable_kmods : false 00:01:59.448 max_lcores : 128 00:01:59.448 tests : false 00:01:59.448 00:01:59.448 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.448 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:59.448 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.448 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.448 [3/268] Linking static target lib/librte_log.a 00:01:59.448 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.448 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.448 [6/268] Linking static target lib/librte_kvargs.a 00:01:59.448 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.448 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.448 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.448 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:59.448 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.448 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:59.448 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:59.448 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:59.448 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:59.448 [16/268] Linking static target lib/librte_telemetry.a 00:01:59.448 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.448 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:59.448 [19/268] Linking target lib/librte_log.so.24.1 00:01:59.448 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:59.706 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:59.706 [22/268] Linking target lib/librte_kvargs.so.24.1 00:01:59.964 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:59.964 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:59.964 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:59.964 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:59.964 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:59.964 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.964 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.222 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.222 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:00.222 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.222 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:00.481 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.481 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:00.739 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.739 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:00.997 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.997 [39/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.997 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:00.997 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:00.997 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:00.997 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:00.997 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:01.255 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:01.255 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:01.255 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:01.514 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:01.514 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:01.514 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.080 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.080 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.080 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.080 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.354 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.354 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.354 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.354 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.354 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.618 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.618 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.875 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.875 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.133 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.133 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.133 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.390 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.390 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.648 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.648 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.648 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.648 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.906 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.906 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.906 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.906 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.906 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:04.472 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:04.472 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:04.472 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:04.472 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:04.472 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:04.729 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:04.729 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:04.729 [85/268] Linking static target lib/librte_ring.a 00:02:04.987 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:04.987 [87/268] Linking static target lib/librte_eal.a 00:02:05.244 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:05.244 [89/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.244 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:05.244 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:05.501 [92/268] Linking static target lib/librte_rcu.a 00:02:05.501 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:05.501 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:05.501 [95/268] Linking static target lib/librte_mempool.a 00:02:05.759 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:05.759 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.017 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.017 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.017 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.017 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.017 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.017 [103/268] Linking static target lib/librte_mbuf.a 00:02:06.274 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:06.274 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.274 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.532 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:06.532 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:06.532 [109/268] Linking static target lib/librte_meter.a 00:02:06.790 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:07.047 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:07.047 [112/268] Linking static target lib/librte_net.a 00:02:07.047 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:07.047 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.047 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:07.047 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.304 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.304 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:07.561 [119/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.561 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:07.819 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.079 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:08.079 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.347 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.604 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.604 [126/268] Linking static target lib/librte_pci.a 00:02:08.604 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:08.604 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.861 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.861 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:08.861 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:08.861 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.861 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.861 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:08.861 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:08.861 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.861 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:09.118 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:09.118 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:09.118 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:09.118 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:09.118 [142/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.118 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:09.118 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:09.375 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.375 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:09.375 [147/268] Linking static target lib/librte_ethdev.a 00:02:09.633 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:09.891 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.891 [150/268] Linking static target lib/librte_timer.a 00:02:09.891 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:09.891 [152/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:09.891 [153/268] Linking static target lib/librte_cmdline.a 00:02:10.150 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:10.408 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:10.666 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:10.666 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:10.666 [158/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.924 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:10.924 [160/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:10.924 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:10.924 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:10.924 [163/268] Linking static target lib/librte_hash.a 00:02:10.924 [164/268] Linking static target lib/librte_compressdev.a 00:02:11.489 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:11.489 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:11.489 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:11.746 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:11.746 [169/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.004 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.004 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:12.004 [172/268] Linking static target lib/librte_dmadev.a 00:02:12.004 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.261 [174/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.261 [175/268] Linking static target lib/librte_cryptodev.a 00:02:12.518 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:12.518 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.518 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.518 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.081 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.081 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:13.082 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.082 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.345 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:13.623 [185/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.623 [186/268] Linking static target lib/librte_security.a 00:02:13.623 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.623 [188/268] Linking static target lib/librte_reorder.a 00:02:13.623 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.623 [190/268] Linking static target lib/librte_power.a 00:02:14.187 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:14.187 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:14.444 [193/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.444 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.444 [195/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.009 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:15.009 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.266 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:15.266 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:15.266 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:15.524 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.782 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.782 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:15.782 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:15.782 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:15.782 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.040 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.297 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.297 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.555 [210/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:16.555 [211/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:16.555 [212/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.555 [213/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.555 [214/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:16.555 [215/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:16.812 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.812 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:16.812 [218/268] Linking static target drivers/librte_bus_pci.a 00:02:16.812 [219/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:16.812 [220/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.812 [221/268] Linking static target drivers/librte_mempool_ring.a 00:02:16.812 [222/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:16.812 [223/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.070 [224/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.070 [225/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.070 [226/268] Linking static target drivers/librte_bus_vdev.a 00:02:17.327 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.327 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.892 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.892 [230/268] Linking target lib/librte_eal.so.24.1 00:02:17.892 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:18.150 [232/268] Linking target lib/librte_timer.so.24.1 00:02:18.150 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:18.150 [234/268] Linking target lib/librte_meter.so.24.1 00:02:18.150 [235/268] Linking target lib/librte_ring.so.24.1 00:02:18.150 [236/268] Linking target lib/librte_pci.so.24.1 00:02:18.150 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:18.150 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:18.150 [239/268] Generating 
symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:18.150 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:18.150 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:18.150 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:18.150 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:18.433 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:18.433 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:18.433 [246/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:18.433 [247/268] Linking static target lib/librte_vhost.a 00:02:18.433 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:18.433 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:18.433 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:18.433 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:18.749 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:18.749 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:18.749 [254/268] Linking target lib/librte_net.so.24.1 00:02:18.749 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:18.749 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:19.006 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.006 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:19.006 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:19.006 [260/268] Linking target lib/librte_hash.so.24.1 00:02:19.006 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:19.006 [262/268] Linking target lib/librte_security.so.24.1 00:02:19.006 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:19.006 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:19.264 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:19.264 [266/268] Linking target lib/librte_power.so.24.1 00:02:19.831 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.831 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:19.831 INFO: autodetecting backend as ninja 00:02:19.831 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:21.205 CC lib/ut_mock/mock.o 00:02:21.205 CC lib/log/log.o 00:02:21.205 CC lib/log/log_flags.o 00:02:21.205 CC lib/ut/ut.o 00:02:21.205 CC lib/log/log_deprecated.o 00:02:21.463 LIB libspdk_ut.a 00:02:21.463 LIB libspdk_ut_mock.a 00:02:21.463 SO libspdk_ut_mock.so.6.0 00:02:21.463 SO libspdk_ut.so.2.0 00:02:21.463 LIB libspdk_log.a 00:02:21.463 SO libspdk_log.so.7.0 00:02:21.463 SYMLINK libspdk_ut_mock.so 00:02:21.463 SYMLINK libspdk_ut.so 00:02:21.463 SYMLINK libspdk_log.so 00:02:21.721 CXX lib/trace_parser/trace.o 00:02:21.721 CC lib/dma/dma.o 00:02:21.721 CC lib/ioat/ioat.o 00:02:21.721 CC lib/util/base64.o 00:02:21.721 CC lib/util/bit_array.o 00:02:21.721 CC lib/util/cpuset.o 00:02:21.721 CC lib/util/crc16.o 00:02:21.721 CC lib/util/crc32.o 00:02:21.721 CC lib/util/crc32c.o 00:02:21.989 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.989 CC lib/util/crc32_ieee.o 00:02:21.989 CC lib/util/crc64.o 00:02:21.989 CC 
lib/util/dif.o 00:02:21.989 CC lib/vfio_user/host/vfio_user.o 00:02:22.251 CC lib/util/fd.o 00:02:22.251 CC lib/util/file.o 00:02:22.251 CC lib/util/hexlify.o 00:02:22.251 LIB libspdk_dma.a 00:02:22.251 SO libspdk_dma.so.4.0 00:02:22.251 CC lib/util/iov.o 00:02:22.251 SYMLINK libspdk_dma.so 00:02:22.251 CC lib/util/math.o 00:02:22.251 CC lib/util/pipe.o 00:02:22.251 CC lib/util/strerror_tls.o 00:02:22.251 LIB libspdk_ioat.a 00:02:22.251 SO libspdk_ioat.so.7.0 00:02:22.251 CC lib/util/string.o 00:02:22.251 LIB libspdk_vfio_user.a 00:02:22.251 CC lib/util/uuid.o 00:02:22.509 SYMLINK libspdk_ioat.so 00:02:22.509 CC lib/util/fd_group.o 00:02:22.509 CC lib/util/xor.o 00:02:22.509 SO libspdk_vfio_user.so.5.0 00:02:22.509 CC lib/util/zipf.o 00:02:22.509 SYMLINK libspdk_vfio_user.so 00:02:22.767 LIB libspdk_util.a 00:02:23.025 SO libspdk_util.so.9.1 00:02:23.025 LIB libspdk_trace_parser.a 00:02:23.025 SO libspdk_trace_parser.so.5.0 00:02:23.284 SYMLINK libspdk_util.so 00:02:23.284 SYMLINK libspdk_trace_parser.so 00:02:23.542 CC lib/json/json_parse.o 00:02:23.542 CC lib/json/json_util.o 00:02:23.542 CC lib/json/json_write.o 00:02:23.542 CC lib/idxd/idxd_user.o 00:02:23.542 CC lib/idxd/idxd.o 00:02:23.542 CC lib/rdma_utils/rdma_utils.o 00:02:23.542 CC lib/conf/conf.o 00:02:23.542 CC lib/env_dpdk/env.o 00:02:23.542 CC lib/vmd/vmd.o 00:02:23.542 CC lib/rdma_provider/common.o 00:02:23.800 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.800 CC lib/vmd/led.o 00:02:23.800 LIB libspdk_conf.a 00:02:23.800 CC lib/env_dpdk/memory.o 00:02:23.800 SO libspdk_conf.so.6.0 00:02:23.800 LIB libspdk_json.a 00:02:23.800 CC lib/idxd/idxd_kernel.o 00:02:24.059 SO libspdk_json.so.6.0 00:02:24.059 LIB libspdk_rdma_utils.a 00:02:24.059 SYMLINK libspdk_conf.so 00:02:24.059 CC lib/env_dpdk/pci.o 00:02:24.059 CC lib/env_dpdk/init.o 00:02:24.059 SO libspdk_rdma_utils.so.1.0 00:02:24.059 SYMLINK libspdk_json.so 00:02:24.059 SYMLINK libspdk_rdma_utils.so 00:02:24.059 LIB libspdk_rdma_provider.a 00:02:24.059 CC lib/env_dpdk/threads.o 00:02:24.059 SO libspdk_rdma_provider.so.6.0 00:02:24.317 CC lib/jsonrpc/jsonrpc_server.o 00:02:24.317 SYMLINK libspdk_rdma_provider.so 00:02:24.317 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:24.317 CC lib/env_dpdk/pci_ioat.o 00:02:24.317 CC lib/env_dpdk/pci_virtio.o 00:02:24.317 CC lib/env_dpdk/pci_vmd.o 00:02:24.317 LIB libspdk_idxd.a 00:02:24.317 CC lib/jsonrpc/jsonrpc_client.o 00:02:24.575 SO libspdk_idxd.so.12.0 00:02:24.575 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:24.575 SYMLINK libspdk_idxd.so 00:02:24.575 CC lib/env_dpdk/pci_idxd.o 00:02:24.575 CC lib/env_dpdk/pci_event.o 00:02:24.575 LIB libspdk_vmd.a 00:02:24.575 CC lib/env_dpdk/sigbus_handler.o 00:02:24.575 CC lib/env_dpdk/pci_dpdk.o 00:02:24.575 SO libspdk_vmd.so.6.0 00:02:24.575 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:24.575 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:24.834 SYMLINK libspdk_vmd.so 00:02:24.834 LIB libspdk_jsonrpc.a 00:02:24.834 SO libspdk_jsonrpc.so.6.0 00:02:24.834 SYMLINK libspdk_jsonrpc.so 00:02:25.093 CC lib/rpc/rpc.o 00:02:25.352 LIB libspdk_rpc.a 00:02:25.352 SO libspdk_rpc.so.6.0 00:02:25.611 SYMLINK libspdk_rpc.so 00:02:25.611 CC lib/trace/trace.o 00:02:25.611 CC lib/notify/notify.o 00:02:25.611 CC lib/trace/trace_flags.o 00:02:25.611 CC lib/notify/notify_rpc.o 00:02:25.611 CC lib/trace/trace_rpc.o 00:02:25.869 CC lib/keyring/keyring_rpc.o 00:02:25.869 CC lib/keyring/keyring.o 00:02:25.869 LIB libspdk_env_dpdk.a 00:02:25.869 LIB libspdk_notify.a 00:02:26.138 SO libspdk_notify.so.6.0 00:02:26.138 LIB libspdk_keyring.a 
00:02:26.138 SO libspdk_env_dpdk.so.14.1 00:02:26.138 LIB libspdk_trace.a 00:02:26.138 SO libspdk_keyring.so.1.0 00:02:26.138 SYMLINK libspdk_notify.so 00:02:26.138 SO libspdk_trace.so.10.0 00:02:26.138 SYMLINK libspdk_keyring.so 00:02:26.138 SYMLINK libspdk_env_dpdk.so 00:02:26.138 SYMLINK libspdk_trace.so 00:02:26.412 CC lib/thread/thread.o 00:02:26.412 CC lib/sock/sock_rpc.o 00:02:26.412 CC lib/sock/sock.o 00:02:26.412 CC lib/thread/iobuf.o 00:02:26.978 LIB libspdk_sock.a 00:02:26.978 SO libspdk_sock.so.10.0 00:02:27.235 SYMLINK libspdk_sock.so 00:02:27.492 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:27.492 CC lib/nvme/nvme_ctrlr.o 00:02:27.492 CC lib/nvme/nvme_fabric.o 00:02:27.492 CC lib/nvme/nvme_ns.o 00:02:27.492 CC lib/nvme/nvme_pcie_common.o 00:02:27.492 CC lib/nvme/nvme_pcie.o 00:02:27.492 CC lib/nvme/nvme.o 00:02:27.492 CC lib/nvme/nvme_qpair.o 00:02:27.492 CC lib/nvme/nvme_ns_cmd.o 00:02:28.423 CC lib/nvme/nvme_quirks.o 00:02:28.423 CC lib/nvme/nvme_transport.o 00:02:28.681 CC lib/nvme/nvme_discovery.o 00:02:28.681 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:28.681 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:28.681 CC lib/nvme/nvme_tcp.o 00:02:28.938 LIB libspdk_thread.a 00:02:28.938 CC lib/nvme/nvme_opal.o 00:02:28.938 SO libspdk_thread.so.10.1 00:02:29.196 SYMLINK libspdk_thread.so 00:02:29.196 CC lib/nvme/nvme_io_msg.o 00:02:29.196 CC lib/nvme/nvme_poll_group.o 00:02:29.196 CC lib/nvme/nvme_zns.o 00:02:29.453 CC lib/nvme/nvme_stubs.o 00:02:29.711 CC lib/accel/accel.o 00:02:29.711 CC lib/blob/blobstore.o 00:02:29.969 CC lib/blob/request.o 00:02:29.969 CC lib/init/json_config.o 00:02:30.227 CC lib/accel/accel_rpc.o 00:02:30.227 CC lib/virtio/virtio.o 00:02:30.227 CC lib/virtio/virtio_vhost_user.o 00:02:30.485 CC lib/virtio/virtio_vfio_user.o 00:02:30.485 CC lib/init/subsystem.o 00:02:30.485 CC lib/init/subsystem_rpc.o 00:02:30.485 CC lib/accel/accel_sw.o 00:02:30.742 CC lib/nvme/nvme_auth.o 00:02:30.742 CC lib/virtio/virtio_pci.o 00:02:30.742 CC lib/nvme/nvme_cuse.o 00:02:30.742 CC lib/init/rpc.o 00:02:31.001 CC lib/blob/zeroes.o 00:02:31.001 CC lib/nvme/nvme_rdma.o 00:02:31.001 CC lib/blob/blob_bs_dev.o 00:02:31.001 LIB libspdk_init.a 00:02:31.001 SO libspdk_init.so.5.0 00:02:31.001 LIB libspdk_accel.a 00:02:31.260 SO libspdk_accel.so.15.1 00:02:31.260 SYMLINK libspdk_init.so 00:02:31.260 LIB libspdk_virtio.a 00:02:31.260 SYMLINK libspdk_accel.so 00:02:31.260 SO libspdk_virtio.so.7.0 00:02:31.519 SYMLINK libspdk_virtio.so 00:02:31.519 CC lib/event/app.o 00:02:31.519 CC lib/event/reactor.o 00:02:31.519 CC lib/event/log_rpc.o 00:02:31.519 CC lib/event/app_rpc.o 00:02:31.519 CC lib/event/scheduler_static.o 00:02:31.519 CC lib/bdev/bdev.o 00:02:31.777 CC lib/bdev/bdev_rpc.o 00:02:31.777 CC lib/bdev/bdev_zone.o 00:02:32.034 CC lib/bdev/part.o 00:02:32.034 CC lib/bdev/scsi_nvme.o 00:02:32.034 LIB libspdk_event.a 00:02:32.292 SO libspdk_event.so.14.0 00:02:32.292 SYMLINK libspdk_event.so 00:02:33.224 LIB libspdk_nvme.a 00:02:33.483 SO libspdk_nvme.so.13.1 00:02:33.741 SYMLINK libspdk_nvme.so 00:02:34.674 LIB libspdk_blob.a 00:02:34.932 SO libspdk_blob.so.11.0 00:02:34.932 SYMLINK libspdk_blob.so 00:02:34.932 LIB libspdk_bdev.a 00:02:35.190 SO libspdk_bdev.so.15.1 00:02:35.190 CC lib/lvol/lvol.o 00:02:35.190 CC lib/blobfs/blobfs.o 00:02:35.190 CC lib/blobfs/tree.o 00:02:35.190 SYMLINK libspdk_bdev.so 00:02:35.449 CC lib/scsi/dev.o 00:02:35.449 CC lib/scsi/lun.o 00:02:35.449 CC lib/scsi/port.o 00:02:35.449 CC lib/ftl/ftl_core.o 00:02:35.449 CC lib/ublk/ublk.o 00:02:35.449 CC lib/nbd/nbd.o 00:02:35.449 
CC lib/nvmf/ctrlr.o 00:02:35.449 CC lib/nbd/nbd_rpc.o 00:02:35.707 CC lib/scsi/scsi.o 00:02:35.707 CC lib/scsi/scsi_bdev.o 00:02:35.707 CC lib/scsi/scsi_pr.o 00:02:35.965 CC lib/ftl/ftl_init.o 00:02:35.965 CC lib/scsi/scsi_rpc.o 00:02:35.965 CC lib/nvmf/ctrlr_discovery.o 00:02:36.223 LIB libspdk_nbd.a 00:02:36.223 SO libspdk_nbd.so.7.0 00:02:36.223 CC lib/ftl/ftl_layout.o 00:02:36.223 CC lib/nvmf/ctrlr_bdev.o 00:02:36.223 SYMLINK libspdk_nbd.so 00:02:36.223 CC lib/nvmf/subsystem.o 00:02:36.481 CC lib/ublk/ublk_rpc.o 00:02:36.481 CC lib/nvmf/nvmf.o 00:02:36.481 LIB libspdk_lvol.a 00:02:36.481 LIB libspdk_blobfs.a 00:02:36.481 SO libspdk_lvol.so.10.0 00:02:36.481 SO libspdk_blobfs.so.10.0 00:02:36.739 SYMLINK libspdk_lvol.so 00:02:36.739 SYMLINK libspdk_blobfs.so 00:02:36.739 CC lib/nvmf/nvmf_rpc.o 00:02:36.739 CC lib/ftl/ftl_debug.o 00:02:36.739 CC lib/scsi/task.o 00:02:36.739 LIB libspdk_ublk.a 00:02:36.739 SO libspdk_ublk.so.3.0 00:02:36.739 CC lib/nvmf/transport.o 00:02:36.739 SYMLINK libspdk_ublk.so 00:02:36.998 CC lib/ftl/ftl_io.o 00:02:36.998 LIB libspdk_scsi.a 00:02:36.998 CC lib/ftl/ftl_sb.o 00:02:36.998 CC lib/nvmf/tcp.o 00:02:36.998 SO libspdk_scsi.so.9.0 00:02:37.257 SYMLINK libspdk_scsi.so 00:02:37.257 CC lib/nvmf/stubs.o 00:02:37.257 CC lib/ftl/ftl_l2p.o 00:02:37.257 CC lib/ftl/ftl_l2p_flat.o 00:02:37.515 CC lib/ftl/ftl_nv_cache.o 00:02:37.773 CC lib/ftl/ftl_band.o 00:02:37.773 CC lib/ftl/ftl_band_ops.o 00:02:37.773 CC lib/iscsi/conn.o 00:02:37.773 CC lib/vhost/vhost.o 00:02:38.031 CC lib/nvmf/mdns_server.o 00:02:38.289 CC lib/vhost/vhost_rpc.o 00:02:38.289 CC lib/vhost/vhost_scsi.o 00:02:38.289 CC lib/iscsi/init_grp.o 00:02:38.548 CC lib/nvmf/rdma.o 00:02:38.548 CC lib/vhost/vhost_blk.o 00:02:38.806 CC lib/vhost/rte_vhost_user.o 00:02:38.806 CC lib/iscsi/iscsi.o 00:02:38.806 CC lib/ftl/ftl_writer.o 00:02:38.806 CC lib/nvmf/auth.o 00:02:39.432 CC lib/ftl/ftl_rq.o 00:02:39.432 CC lib/ftl/ftl_reloc.o 00:02:39.432 CC lib/iscsi/md5.o 00:02:39.432 CC lib/ftl/ftl_l2p_cache.o 00:02:39.692 CC lib/ftl/ftl_p2l.o 00:02:39.692 CC lib/ftl/mngt/ftl_mngt.o 00:02:39.952 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:39.952 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:40.211 CC lib/iscsi/param.o 00:02:40.211 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:40.211 CC lib/iscsi/portal_grp.o 00:02:40.211 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:40.211 CC lib/iscsi/tgt_node.o 00:02:40.470 CC lib/iscsi/iscsi_subsystem.o 00:02:40.470 CC lib/iscsi/iscsi_rpc.o 00:02:40.470 LIB libspdk_vhost.a 00:02:40.470 CC lib/iscsi/task.o 00:02:40.729 SO libspdk_vhost.so.8.0 00:02:40.729 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:40.729 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:40.987 SYMLINK libspdk_vhost.so 00:02:40.987 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:40.987 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:40.987 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.245 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.245 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.245 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.245 CC lib/ftl/utils/ftl_conf.o 00:02:41.245 CC lib/ftl/utils/ftl_md.o 00:02:41.245 CC lib/ftl/utils/ftl_mempool.o 00:02:41.245 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.245 CC lib/ftl/utils/ftl_property.o 00:02:41.245 LIB libspdk_iscsi.a 00:02:41.504 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.504 SO libspdk_iscsi.so.8.0 00:02:41.504 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.504 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.504 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:41.762 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.762 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.762 SYMLINK libspdk_iscsi.so 00:02:41.762 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:42.019 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:42.019 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:42.019 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:42.019 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:42.019 CC lib/ftl/base/ftl_base_dev.o 00:02:42.019 CC lib/ftl/base/ftl_base_bdev.o 00:02:42.019 CC lib/ftl/ftl_trace.o 00:02:42.583 LIB libspdk_nvmf.a 00:02:42.583 LIB libspdk_ftl.a 00:02:42.583 SO libspdk_nvmf.so.19.0 00:02:42.840 SO libspdk_ftl.so.9.0 00:02:42.840 SYMLINK libspdk_nvmf.so 00:02:43.407 SYMLINK libspdk_ftl.so 00:02:43.664 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.926 CC module/accel/dsa/accel_dsa.o 00:02:43.926 CC module/blob/bdev/blob_bdev.o 00:02:43.926 CC module/accel/error/accel_error.o 00:02:43.926 CC module/accel/iaa/accel_iaa.o 00:02:43.926 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.926 CC module/sock/posix/posix.o 00:02:43.926 CC module/accel/ioat/accel_ioat.o 00:02:43.926 CC module/keyring/file/keyring.o 00:02:43.926 CC module/keyring/linux/keyring.o 00:02:43.926 LIB libspdk_env_dpdk_rpc.a 00:02:43.926 SO libspdk_env_dpdk_rpc.so.6.0 00:02:44.183 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.183 CC module/keyring/linux/keyring_rpc.o 00:02:44.183 CC module/keyring/file/keyring_rpc.o 00:02:44.183 LIB libspdk_blob_bdev.a 00:02:44.183 CC module/accel/error/accel_error_rpc.o 00:02:44.183 CC module/accel/iaa/accel_iaa_rpc.o 00:02:44.183 SO libspdk_blob_bdev.so.11.0 00:02:44.183 CC module/accel/ioat/accel_ioat_rpc.o 00:02:44.183 LIB libspdk_scheduler_dynamic.a 00:02:44.183 CC module/accel/dsa/accel_dsa_rpc.o 00:02:44.183 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.183 SYMLINK libspdk_blob_bdev.so 00:02:44.442 LIB libspdk_keyring_linux.a 00:02:44.442 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.442 LIB libspdk_keyring_file.a 00:02:44.442 SO libspdk_keyring_linux.so.1.0 00:02:44.442 LIB libspdk_accel_error.a 00:02:44.442 LIB libspdk_accel_iaa.a 00:02:44.442 SO libspdk_keyring_file.so.1.0 00:02:44.442 LIB libspdk_accel_ioat.a 00:02:44.442 SO libspdk_accel_error.so.2.0 00:02:44.442 SO libspdk_accel_iaa.so.3.0 00:02:44.442 SO libspdk_accel_ioat.so.6.0 00:02:44.442 LIB libspdk_accel_dsa.a 00:02:44.442 SYMLINK libspdk_keyring_linux.so 00:02:44.442 SO libspdk_accel_dsa.so.5.0 00:02:44.442 SYMLINK libspdk_keyring_file.so 00:02:44.442 SYMLINK libspdk_accel_iaa.so 00:02:44.700 SYMLINK libspdk_accel_error.so 00:02:44.700 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:44.700 CC module/scheduler/gscheduler/gscheduler.o 00:02:44.700 SYMLINK libspdk_accel_ioat.so 00:02:44.700 SYMLINK libspdk_accel_dsa.so 00:02:44.700 CC module/bdev/delay/vbdev_delay.o 00:02:44.700 CC module/bdev/error/vbdev_error.o 00:02:44.958 LIB libspdk_scheduler_gscheduler.a 00:02:44.958 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.958 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.958 CC module/bdev/gpt/gpt.o 00:02:44.958 CC module/bdev/malloc/bdev_malloc.o 00:02:44.958 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.958 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.958 CC module/bdev/null/bdev_null.o 00:02:44.958 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.958 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.958 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.958 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.958 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:45.216 LIB libspdk_sock_posix.a 00:02:45.216 CC module/bdev/error/vbdev_error_rpc.o 00:02:45.216 CC module/bdev/gpt/vbdev_gpt.o 
00:02:45.216 SO libspdk_sock_posix.so.6.0 00:02:45.216 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:45.216 CC module/bdev/null/bdev_null_rpc.o 00:02:45.474 LIB libspdk_blobfs_bdev.a 00:02:45.474 SYMLINK libspdk_sock_posix.so 00:02:45.474 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:45.474 LIB libspdk_bdev_delay.a 00:02:45.474 SO libspdk_blobfs_bdev.so.6.0 00:02:45.474 SO libspdk_bdev_delay.so.6.0 00:02:45.474 LIB libspdk_bdev_error.a 00:02:45.474 SYMLINK libspdk_blobfs_bdev.so 00:02:45.731 SO libspdk_bdev_error.so.6.0 00:02:45.731 SYMLINK libspdk_bdev_delay.so 00:02:45.731 LIB libspdk_bdev_malloc.a 00:02:45.731 LIB libspdk_bdev_gpt.a 00:02:45.731 LIB libspdk_bdev_null.a 00:02:45.731 SO libspdk_bdev_gpt.so.6.0 00:02:45.731 SO libspdk_bdev_malloc.so.6.0 00:02:45.731 SYMLINK libspdk_bdev_error.so 00:02:45.731 CC module/bdev/nvme/bdev_nvme.o 00:02:45.731 SO libspdk_bdev_null.so.6.0 00:02:45.731 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:45.731 SYMLINK libspdk_bdev_gpt.so 00:02:45.731 SYMLINK libspdk_bdev_malloc.so 00:02:45.989 CC module/bdev/nvme/nvme_rpc.o 00:02:45.989 CC module/bdev/nvme/bdev_mdns_client.o 00:02:45.989 CC module/bdev/passthru/vbdev_passthru.o 00:02:45.989 SYMLINK libspdk_bdev_null.so 00:02:45.989 CC module/bdev/raid/bdev_raid.o 00:02:45.989 CC module/bdev/split/vbdev_split.o 00:02:45.989 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:45.989 LIB libspdk_bdev_lvol.a 00:02:46.247 SO libspdk_bdev_lvol.so.6.0 00:02:46.247 CC module/bdev/aio/bdev_aio.o 00:02:46.247 SYMLINK libspdk_bdev_lvol.so 00:02:46.247 CC module/bdev/raid/bdev_raid_rpc.o 00:02:46.247 CC module/bdev/raid/bdev_raid_sb.o 00:02:46.247 CC module/bdev/split/vbdev_split_rpc.o 00:02:46.505 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:46.505 CC module/bdev/ftl/bdev_ftl.o 00:02:46.505 CC module/bdev/raid/raid0.o 00:02:46.763 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:46.763 LIB libspdk_bdev_split.a 00:02:46.763 SO libspdk_bdev_split.so.6.0 00:02:46.763 LIB libspdk_bdev_passthru.a 00:02:46.763 CC module/bdev/aio/bdev_aio_rpc.o 00:02:46.763 CC module/bdev/raid/raid1.o 00:02:46.763 SO libspdk_bdev_passthru.so.6.0 00:02:46.763 SYMLINK libspdk_bdev_split.so 00:02:46.763 CC module/bdev/raid/concat.o 00:02:46.763 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:47.021 SYMLINK libspdk_bdev_passthru.so 00:02:47.021 LIB libspdk_bdev_zone_block.a 00:02:47.021 SO libspdk_bdev_zone_block.so.6.0 00:02:47.021 LIB libspdk_bdev_aio.a 00:02:47.021 CC module/bdev/nvme/vbdev_opal.o 00:02:47.021 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:47.021 SO libspdk_bdev_aio.so.6.0 00:02:47.021 SYMLINK libspdk_bdev_zone_block.so 00:02:47.279 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:47.279 CC module/bdev/iscsi/bdev_iscsi.o 00:02:47.279 SYMLINK libspdk_bdev_aio.so 00:02:47.279 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:47.279 LIB libspdk_bdev_ftl.a 00:02:47.279 SO libspdk_bdev_ftl.so.6.0 00:02:47.540 SYMLINK libspdk_bdev_ftl.so 00:02:47.540 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:47.540 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:47.540 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:47.540 LIB libspdk_bdev_raid.a 00:02:47.805 SO libspdk_bdev_raid.so.6.0 00:02:47.805 SYMLINK libspdk_bdev_raid.so 00:02:47.805 LIB libspdk_bdev_iscsi.a 00:02:47.805 SO libspdk_bdev_iscsi.so.6.0 00:02:48.062 SYMLINK libspdk_bdev_iscsi.so 00:02:48.321 LIB libspdk_bdev_virtio.a 00:02:48.321 SO libspdk_bdev_virtio.so.6.0 00:02:48.584 SYMLINK libspdk_bdev_virtio.so 00:02:49.544 LIB libspdk_bdev_nvme.a 00:02:49.544 SO libspdk_bdev_nvme.so.7.0 00:02:49.544 
SYMLINK libspdk_bdev_nvme.so 00:02:50.109 CC module/event/subsystems/iobuf/iobuf.o 00:02:50.109 CC module/event/subsystems/scheduler/scheduler.o 00:02:50.109 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:50.109 CC module/event/subsystems/vmd/vmd.o 00:02:50.109 CC module/event/subsystems/sock/sock.o 00:02:50.109 CC module/event/subsystems/keyring/keyring.o 00:02:50.109 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:50.109 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:50.109 LIB libspdk_event_scheduler.a 00:02:50.109 LIB libspdk_event_keyring.a 00:02:50.109 LIB libspdk_event_vmd.a 00:02:50.366 LIB libspdk_event_sock.a 00:02:50.366 LIB libspdk_event_vhost_blk.a 00:02:50.366 SO libspdk_event_keyring.so.1.0 00:02:50.366 SO libspdk_event_scheduler.so.4.0 00:02:50.366 LIB libspdk_event_iobuf.a 00:02:50.366 SO libspdk_event_vmd.so.6.0 00:02:50.366 SO libspdk_event_vhost_blk.so.3.0 00:02:50.366 SO libspdk_event_sock.so.5.0 00:02:50.366 SO libspdk_event_iobuf.so.3.0 00:02:50.366 SYMLINK libspdk_event_vmd.so 00:02:50.366 SYMLINK libspdk_event_scheduler.so 00:02:50.366 SYMLINK libspdk_event_keyring.so 00:02:50.366 SYMLINK libspdk_event_vhost_blk.so 00:02:50.366 SYMLINK libspdk_event_sock.so 00:02:50.366 SYMLINK libspdk_event_iobuf.so 00:02:50.623 CC module/event/subsystems/accel/accel.o 00:02:50.879 LIB libspdk_event_accel.a 00:02:50.879 SO libspdk_event_accel.so.6.0 00:02:50.879 SYMLINK libspdk_event_accel.so 00:02:51.136 CC module/event/subsystems/bdev/bdev.o 00:02:51.393 LIB libspdk_event_bdev.a 00:02:51.393 SO libspdk_event_bdev.so.6.0 00:02:51.393 SYMLINK libspdk_event_bdev.so 00:02:51.650 CC module/event/subsystems/ublk/ublk.o 00:02:51.650 CC module/event/subsystems/scsi/scsi.o 00:02:51.650 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:51.650 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:51.650 CC module/event/subsystems/nbd/nbd.o 00:02:51.906 LIB libspdk_event_ublk.a 00:02:51.906 LIB libspdk_event_nbd.a 00:02:51.906 SO libspdk_event_ublk.so.3.0 00:02:51.906 LIB libspdk_event_scsi.a 00:02:51.906 SO libspdk_event_nbd.so.6.0 00:02:52.165 SO libspdk_event_scsi.so.6.0 00:02:52.165 SYMLINK libspdk_event_ublk.so 00:02:52.165 SYMLINK libspdk_event_nbd.so 00:02:52.165 LIB libspdk_event_nvmf.a 00:02:52.165 SYMLINK libspdk_event_scsi.so 00:02:52.165 SO libspdk_event_nvmf.so.6.0 00:02:52.165 SYMLINK libspdk_event_nvmf.so 00:02:52.428 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:52.428 CC module/event/subsystems/iscsi/iscsi.o 00:02:52.428 LIB libspdk_event_vhost_scsi.a 00:02:52.686 SO libspdk_event_vhost_scsi.so.3.0 00:02:52.686 LIB libspdk_event_iscsi.a 00:02:52.686 SYMLINK libspdk_event_vhost_scsi.so 00:02:52.686 SO libspdk_event_iscsi.so.6.0 00:02:52.686 SYMLINK libspdk_event_iscsi.so 00:02:52.944 SO libspdk.so.6.0 00:02:52.944 SYMLINK libspdk.so 00:02:53.202 CXX app/trace/trace.o 00:02:53.202 CC app/trace_record/trace_record.o 00:02:53.202 CC app/iscsi_tgt/iscsi_tgt.o 00:02:53.202 CC app/nvmf_tgt/nvmf_main.o 00:02:53.202 CC examples/ioat/perf/perf.o 00:02:53.202 CC examples/util/zipf/zipf.o 00:02:53.202 CC test/thread/poller_perf/poller_perf.o 00:02:53.202 CC app/spdk_tgt/spdk_tgt.o 00:02:53.202 CC test/dma/test_dma/test_dma.o 00:02:53.460 LINK zipf 00:02:53.460 LINK nvmf_tgt 00:02:53.460 LINK poller_perf 00:02:53.460 LINK iscsi_tgt 00:02:53.460 LINK ioat_perf 00:02:53.460 LINK spdk_tgt 00:02:53.717 LINK spdk_trace_record 00:02:53.717 LINK spdk_trace 00:02:53.717 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:53.717 CC examples/ioat/verify/verify.o 00:02:53.976 CC 
app/spdk_lspci/spdk_lspci.o 00:02:53.976 LINK test_dma 00:02:53.976 CC examples/sock/hello_world/hello_sock.o 00:02:53.976 CC app/spdk_nvme_perf/perf.o 00:02:53.976 CC examples/thread/thread/thread_ex.o 00:02:53.976 LINK spdk_lspci 00:02:53.976 LINK interrupt_tgt 00:02:54.234 CC examples/vmd/lsvmd/lsvmd.o 00:02:54.234 LINK verify 00:02:54.234 LINK hello_sock 00:02:54.234 LINK lsvmd 00:02:54.234 CC examples/idxd/perf/perf.o 00:02:54.492 LINK thread 00:02:54.492 CC app/spdk_nvme_identify/identify.o 00:02:54.492 CC examples/vmd/led/led.o 00:02:54.492 CC app/spdk_nvme_discover/discovery_aer.o 00:02:54.750 CC test/app/bdev_svc/bdev_svc.o 00:02:55.009 LINK led 00:02:55.009 CC examples/nvme/hello_world/hello_world.o 00:02:55.009 CC test/blobfs/mkfs/mkfs.o 00:02:55.009 LINK spdk_nvme_discover 00:02:55.009 CC examples/nvme/reconnect/reconnect.o 00:02:55.009 LINK idxd_perf 00:02:55.009 LINK spdk_nvme_perf 00:02:55.268 LINK bdev_svc 00:02:55.268 LINK mkfs 00:02:55.268 LINK hello_world 00:02:55.526 TEST_HEADER include/spdk/accel.h 00:02:55.526 TEST_HEADER include/spdk/accel_module.h 00:02:55.526 TEST_HEADER include/spdk/assert.h 00:02:55.526 TEST_HEADER include/spdk/barrier.h 00:02:55.526 TEST_HEADER include/spdk/base64.h 00:02:55.526 TEST_HEADER include/spdk/bdev.h 00:02:55.526 TEST_HEADER include/spdk/bdev_module.h 00:02:55.526 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.526 TEST_HEADER include/spdk/bit_array.h 00:02:55.526 TEST_HEADER include/spdk/bit_pool.h 00:02:55.526 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.526 CC examples/accel/perf/accel_perf.o 00:02:55.526 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.526 TEST_HEADER include/spdk/blobfs.h 00:02:55.526 TEST_HEADER include/spdk/blob.h 00:02:55.526 TEST_HEADER include/spdk/conf.h 00:02:55.526 TEST_HEADER include/spdk/config.h 00:02:55.526 TEST_HEADER include/spdk/cpuset.h 00:02:55.526 TEST_HEADER include/spdk/crc16.h 00:02:55.526 TEST_HEADER include/spdk/crc32.h 00:02:55.526 TEST_HEADER include/spdk/crc64.h 00:02:55.526 TEST_HEADER include/spdk/dif.h 00:02:55.526 TEST_HEADER include/spdk/dma.h 00:02:55.526 TEST_HEADER include/spdk/endian.h 00:02:55.526 TEST_HEADER include/spdk/env_dpdk.h 00:02:55.526 TEST_HEADER include/spdk/env.h 00:02:55.526 TEST_HEADER include/spdk/event.h 00:02:55.526 LINK reconnect 00:02:55.526 TEST_HEADER include/spdk/fd_group.h 00:02:55.526 TEST_HEADER include/spdk/fd.h 00:02:55.526 TEST_HEADER include/spdk/file.h 00:02:55.526 TEST_HEADER include/spdk/ftl.h 00:02:55.526 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.526 TEST_HEADER include/spdk/hexlify.h 00:02:55.526 TEST_HEADER include/spdk/histogram_data.h 00:02:55.526 TEST_HEADER include/spdk/idxd.h 00:02:55.526 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.526 TEST_HEADER include/spdk/init.h 00:02:55.526 TEST_HEADER include/spdk/ioat.h 00:02:55.526 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.526 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.526 TEST_HEADER include/spdk/json.h 00:02:55.526 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.526 TEST_HEADER include/spdk/keyring.h 00:02:55.526 TEST_HEADER include/spdk/keyring_module.h 00:02:55.526 TEST_HEADER include/spdk/likely.h 00:02:55.526 TEST_HEADER include/spdk/log.h 00:02:55.526 TEST_HEADER include/spdk/lvol.h 00:02:55.526 TEST_HEADER include/spdk/memory.h 00:02:55.526 TEST_HEADER include/spdk/mmio.h 00:02:55.526 TEST_HEADER include/spdk/nbd.h 00:02:55.526 CC examples/blob/hello_world/hello_blob.o 00:02:55.526 TEST_HEADER include/spdk/notify.h 00:02:55.526 TEST_HEADER include/spdk/nvme.h 00:02:55.526 
TEST_HEADER include/spdk/nvme_intel.h 00:02:55.526 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.526 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.526 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.526 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.526 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.526 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.526 TEST_HEADER include/spdk/nvmf.h 00:02:55.526 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.526 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.784 TEST_HEADER include/spdk/opal.h 00:02:55.784 CC examples/blob/cli/blobcli.o 00:02:55.784 TEST_HEADER include/spdk/opal_spec.h 00:02:55.784 TEST_HEADER include/spdk/pci_ids.h 00:02:55.784 TEST_HEADER include/spdk/pipe.h 00:02:55.784 TEST_HEADER include/spdk/queue.h 00:02:55.784 TEST_HEADER include/spdk/reduce.h 00:02:55.784 TEST_HEADER include/spdk/rpc.h 00:02:55.784 TEST_HEADER include/spdk/scheduler.h 00:02:55.784 TEST_HEADER include/spdk/scsi.h 00:02:55.784 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.784 TEST_HEADER include/spdk/sock.h 00:02:55.784 TEST_HEADER include/spdk/stdinc.h 00:02:55.784 TEST_HEADER include/spdk/string.h 00:02:55.784 TEST_HEADER include/spdk/thread.h 00:02:55.784 TEST_HEADER include/spdk/trace.h 00:02:55.784 TEST_HEADER include/spdk/trace_parser.h 00:02:55.784 TEST_HEADER include/spdk/tree.h 00:02:55.784 TEST_HEADER include/spdk/ublk.h 00:02:55.784 TEST_HEADER include/spdk/util.h 00:02:55.784 TEST_HEADER include/spdk/uuid.h 00:02:55.784 TEST_HEADER include/spdk/version.h 00:02:55.784 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.784 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.784 TEST_HEADER include/spdk/vhost.h 00:02:55.784 TEST_HEADER include/spdk/vmd.h 00:02:55.784 TEST_HEADER include/spdk/xor.h 00:02:55.784 TEST_HEADER include/spdk/zipf.h 00:02:55.784 CXX test/cpp_headers/accel.o 00:02:55.784 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:55.784 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.042 LINK hello_blob 00:02:56.042 CXX test/cpp_headers/accel_module.o 00:02:56.042 CC test/event/event_perf/event_perf.o 00:02:56.042 LINK spdk_nvme_identify 00:02:56.300 CC test/env/mem_callbacks/mem_callbacks.o 00:02:56.300 CXX test/cpp_headers/assert.o 00:02:56.300 LINK accel_perf 00:02:56.300 LINK blobcli 00:02:56.558 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.558 LINK event_perf 00:02:56.558 CC app/spdk_top/spdk_top.o 00:02:56.558 LINK nvme_fuzz 00:02:56.816 CXX test/cpp_headers/barrier.o 00:02:56.816 LINK nvme_manage 00:02:56.816 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:56.816 CC test/env/vtophys/vtophys.o 00:02:57.074 CC test/event/reactor/reactor.o 00:02:57.074 CXX test/cpp_headers/base64.o 00:02:57.074 LINK vtophys 00:02:57.074 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.332 LINK mem_callbacks 00:02:57.332 CC examples/nvme/arbitration/arbitration.o 00:02:57.332 LINK reactor 00:02:57.332 CC test/event/reactor_perf/reactor_perf.o 00:02:57.332 CXX test/cpp_headers/bdev.o 00:02:57.590 LINK reactor_perf 00:02:57.590 CC examples/nvme/hotplug/hotplug.o 00:02:57.590 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:57.590 CXX test/cpp_headers/bdev_module.o 00:02:57.590 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:57.590 LINK arbitration 00:02:57.849 LINK vhost_fuzz 00:02:57.849 LINK hotplug 00:02:57.849 LINK cmb_copy 00:02:57.849 CXX test/cpp_headers/bdev_zone.o 00:02:57.849 LINK env_dpdk_post_init 00:02:58.107 CC test/event/app_repeat/app_repeat.o 00:02:58.107 CC examples/nvme/abort/abort.o 00:02:58.107 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:02:58.365 LINK app_repeat 00:02:58.365 CC app/vhost/vhost.o 00:02:58.365 CXX test/cpp_headers/bit_array.o 00:02:58.622 LINK spdk_top 00:02:58.622 CC test/env/memory/memory_ut.o 00:02:58.622 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.622 LINK pmr_persistence 00:02:58.889 CXX test/cpp_headers/bit_pool.o 00:02:58.889 CC test/event/scheduler/scheduler.o 00:02:58.889 LINK vhost 00:02:59.146 LINK abort 00:02:59.146 CC test/env/pci/pci_ut.o 00:02:59.146 LINK hello_bdev 00:02:59.403 CXX test/cpp_headers/blob_bdev.o 00:02:59.403 LINK scheduler 00:02:59.660 CC test/lvol/esnap/esnap.o 00:02:59.660 CC test/app/histogram_perf/histogram_perf.o 00:02:59.918 LINK pci_ut 00:02:59.918 CC app/spdk_dd/spdk_dd.o 00:02:59.918 CXX test/cpp_headers/blobfs_bdev.o 00:02:59.918 LINK iscsi_fuzz 00:02:59.918 CC examples/bdev/bdevperf/bdevperf.o 00:03:00.175 LINK histogram_perf 00:03:00.175 CC test/app/jsoncat/jsoncat.o 00:03:00.175 CXX test/cpp_headers/blobfs.o 00:03:00.432 LINK jsoncat 00:03:00.432 LINK spdk_dd 00:03:00.432 LINK memory_ut 00:03:00.690 CXX test/cpp_headers/blob.o 00:03:00.690 CC test/app/stub/stub.o 00:03:00.690 CC app/fio/nvme/fio_plugin.o 00:03:00.948 CC test/nvme/aer/aer.o 00:03:00.948 LINK stub 00:03:00.948 CC test/nvme/reset/reset.o 00:03:00.948 CXX test/cpp_headers/conf.o 00:03:00.948 CC test/nvme/sgl/sgl.o 00:03:00.948 CC test/nvme/e2edp/nvme_dp.o 00:03:00.948 CXX test/cpp_headers/config.o 00:03:01.206 LINK bdevperf 00:03:01.206 CXX test/cpp_headers/cpuset.o 00:03:01.206 CC test/nvme/overhead/overhead.o 00:03:01.206 LINK aer 00:03:01.206 LINK reset 00:03:01.464 LINK nvme_dp 00:03:01.464 CXX test/cpp_headers/crc16.o 00:03:01.464 CC test/nvme/err_injection/err_injection.o 00:03:01.464 LINK sgl 00:03:01.722 CXX test/cpp_headers/crc32.o 00:03:01.722 CC test/rpc_client/rpc_client_test.o 00:03:01.722 LINK spdk_nvme 00:03:01.722 LINK overhead 00:03:01.722 CC test/nvme/startup/startup.o 00:03:01.722 LINK err_injection 00:03:01.722 CXX test/cpp_headers/crc64.o 00:03:01.989 LINK rpc_client_test 00:03:01.989 CC test/accel/dif/dif.o 00:03:01.989 CC app/fio/bdev/fio_plugin.o 00:03:01.989 LINK startup 00:03:01.989 CC examples/nvmf/nvmf/nvmf.o 00:03:01.989 CXX test/cpp_headers/dif.o 00:03:02.249 CXX test/cpp_headers/dma.o 00:03:02.249 CC test/nvme/reserve/reserve.o 00:03:02.249 CC test/nvme/simple_copy/simple_copy.o 00:03:02.507 CXX test/cpp_headers/endian.o 00:03:02.507 CXX test/cpp_headers/env_dpdk.o 00:03:02.507 LINK nvmf 00:03:02.507 CC test/nvme/connect_stress/connect_stress.o 00:03:02.507 LINK reserve 00:03:02.764 LINK simple_copy 00:03:02.764 CXX test/cpp_headers/env.o 00:03:02.764 CC test/nvme/boot_partition/boot_partition.o 00:03:02.764 LINK connect_stress 00:03:03.022 LINK dif 00:03:03.022 LINK spdk_bdev 00:03:03.022 CC test/nvme/compliance/nvme_compliance.o 00:03:03.022 CXX test/cpp_headers/event.o 00:03:03.022 CC test/nvme/fused_ordering/fused_ordering.o 00:03:03.022 LINK boot_partition 00:03:03.280 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:03.280 CXX test/cpp_headers/fd_group.o 00:03:03.280 CC test/nvme/fdp/fdp.o 00:03:03.280 CC test/nvme/cuse/cuse.o 00:03:03.280 CXX test/cpp_headers/fd.o 00:03:03.537 CXX test/cpp_headers/file.o 00:03:03.537 LINK fused_ordering 00:03:03.537 LINK doorbell_aers 00:03:03.537 CXX test/cpp_headers/ftl.o 00:03:03.537 LINK nvme_compliance 00:03:03.537 CXX test/cpp_headers/gpt_spec.o 00:03:03.795 CC test/bdev/bdevio/bdevio.o 00:03:03.795 CXX test/cpp_headers/hexlify.o 00:03:03.795 CXX 
test/cpp_headers/histogram_data.o 00:03:03.795 CXX test/cpp_headers/idxd.o 00:03:03.795 CXX test/cpp_headers/idxd_spec.o 00:03:03.795 LINK fdp 00:03:03.795 CXX test/cpp_headers/ioat.o 00:03:03.795 CXX test/cpp_headers/init.o 00:03:04.054 CXX test/cpp_headers/ioat_spec.o 00:03:04.054 CXX test/cpp_headers/iscsi_spec.o 00:03:04.054 CXX test/cpp_headers/json.o 00:03:04.054 CXX test/cpp_headers/jsonrpc.o 00:03:04.054 CXX test/cpp_headers/keyring.o 00:03:04.054 CXX test/cpp_headers/keyring_module.o 00:03:04.312 LINK bdevio 00:03:04.312 CXX test/cpp_headers/likely.o 00:03:04.312 CXX test/cpp_headers/log.o 00:03:04.312 CXX test/cpp_headers/lvol.o 00:03:04.312 CXX test/cpp_headers/memory.o 00:03:04.312 CXX test/cpp_headers/mmio.o 00:03:04.312 CXX test/cpp_headers/nbd.o 00:03:04.312 CXX test/cpp_headers/notify.o 00:03:04.312 CXX test/cpp_headers/nvme.o 00:03:04.312 CXX test/cpp_headers/nvme_intel.o 00:03:04.570 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.570 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.570 CXX test/cpp_headers/nvme_spec.o 00:03:04.570 CXX test/cpp_headers/nvme_zns.o 00:03:04.570 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.570 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.570 CXX test/cpp_headers/nvmf.o 00:03:04.570 CXX test/cpp_headers/nvmf_spec.o 00:03:04.829 CXX test/cpp_headers/nvmf_transport.o 00:03:04.829 CXX test/cpp_headers/opal.o 00:03:04.829 CXX test/cpp_headers/opal_spec.o 00:03:04.829 CXX test/cpp_headers/pci_ids.o 00:03:04.829 CXX test/cpp_headers/pipe.o 00:03:05.087 CXX test/cpp_headers/queue.o 00:03:05.087 CXX test/cpp_headers/reduce.o 00:03:05.087 CXX test/cpp_headers/rpc.o 00:03:05.087 CXX test/cpp_headers/scheduler.o 00:03:05.087 CXX test/cpp_headers/scsi.o 00:03:05.087 CXX test/cpp_headers/scsi_spec.o 00:03:05.087 CXX test/cpp_headers/sock.o 00:03:05.087 CXX test/cpp_headers/stdinc.o 00:03:05.087 CXX test/cpp_headers/string.o 00:03:05.345 CXX test/cpp_headers/thread.o 00:03:05.345 CXX test/cpp_headers/trace.o 00:03:05.345 CXX test/cpp_headers/trace_parser.o 00:03:05.345 CXX test/cpp_headers/tree.o 00:03:05.345 CXX test/cpp_headers/ublk.o 00:03:05.345 CXX test/cpp_headers/util.o 00:03:05.345 CXX test/cpp_headers/uuid.o 00:03:05.345 CXX test/cpp_headers/version.o 00:03:05.345 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.602 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.602 CXX test/cpp_headers/vhost.o 00:03:05.602 CXX test/cpp_headers/vmd.o 00:03:05.602 LINK cuse 00:03:05.602 CXX test/cpp_headers/xor.o 00:03:05.602 CXX test/cpp_headers/zipf.o 00:03:07.503 LINK esnap 00:03:07.760 00:03:07.760 real 1m25.079s 00:03:07.760 user 9m14.931s 00:03:07.760 sys 1m56.227s 00:03:07.760 20:19:29 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:07.760 ************************************ 00:03:07.760 END TEST make 00:03:07.760 ************************************ 00:03:07.760 20:19:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:08.018 20:19:29 -- common/autotest_common.sh@1142 -- $ return 0 00:03:08.018 20:19:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:08.018 20:19:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:08.018 20:19:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:08.018 20:19:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.018 20:19:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:08.018 20:19:29 -- pm/common@44 -- $ pid=5189 00:03:08.018 20:19:29 -- pm/common@50 -- $ kill -TERM 5189 00:03:08.018 20:19:29 -- pm/common@42 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.018 20:19:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:08.018 20:19:29 -- pm/common@44 -- $ pid=5191 00:03:08.018 20:19:29 -- pm/common@50 -- $ kill -TERM 5191 00:03:08.018 20:19:29 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:08.018 20:19:29 -- nvmf/common.sh@7 -- # uname -s 00:03:08.018 20:19:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:08.018 20:19:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:08.018 20:19:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:08.018 20:19:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:08.018 20:19:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:08.018 20:19:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:08.018 20:19:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:08.018 20:19:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:08.018 20:19:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:08.018 20:19:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:08.018 20:19:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:03:08.018 20:19:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:03:08.018 20:19:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:08.018 20:19:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:08.018 20:19:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:08.018 20:19:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:08.018 20:19:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:08.018 20:19:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:08.018 20:19:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:08.018 20:19:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:08.018 20:19:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.018 20:19:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.018 20:19:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.018 20:19:29 -- paths/export.sh@5 -- # export PATH 00:03:08.018 20:19:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:08.018 20:19:29 -- nvmf/common.sh@47 -- # : 0 00:03:08.018 20:19:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:08.018 20:19:29 -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:08.018 20:19:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:08.018 20:19:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:08.018 20:19:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:08.018 20:19:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:08.018 20:19:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:08.018 20:19:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:08.018 20:19:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:08.018 20:19:29 -- spdk/autotest.sh@32 -- # uname -s 00:03:08.018 20:19:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:08.018 20:19:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:08.018 20:19:29 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.018 20:19:29 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:08.018 20:19:29 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:08.018 20:19:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:08.018 20:19:29 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:08.018 20:19:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:08.018 20:19:29 -- spdk/autotest.sh@48 -- # udevadm_pid=54729 00:03:08.018 20:19:29 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:08.018 20:19:29 -- pm/common@17 -- # local monitor 00:03:08.018 20:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.018 20:19:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.018 20:19:29 -- pm/common@25 -- # sleep 1 00:03:08.018 20:19:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:08.018 20:19:29 -- pm/common@21 -- # date +%s 00:03:08.018 20:19:29 -- pm/common@21 -- # date +%s 00:03:08.019 20:19:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721074769 00:03:08.019 20:19:29 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721074769 00:03:08.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721074769_collect-cpu-load.pm.log 00:03:08.019 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721074769_collect-vmstat.pm.log 00:03:08.953 20:19:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:08.953 20:19:30 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:08.953 20:19:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:08.953 20:19:30 -- common/autotest_common.sh@10 -- # set +x 00:03:08.953 20:19:30 -- spdk/autotest.sh@59 -- # create_test_list 00:03:08.953 20:19:30 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:08.953 20:19:30 -- common/autotest_common.sh@10 -- # set +x 00:03:09.211 20:19:30 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:09.211 20:19:30 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:09.211 20:19:30 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:09.211 20:19:30 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:09.211 20:19:30 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:03:09.211 20:19:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:09.211 20:19:30 -- common/autotest_common.sh@1455 -- # uname 00:03:09.211 20:19:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:09.211 20:19:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:09.211 20:19:30 -- common/autotest_common.sh@1475 -- # uname 00:03:09.211 20:19:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:09.211 20:19:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:09.211 20:19:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:09.211 20:19:30 -- spdk/autotest.sh@72 -- # hash lcov 00:03:09.211 20:19:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:09.211 20:19:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:09.211 --rc lcov_branch_coverage=1 00:03:09.211 --rc lcov_function_coverage=1 00:03:09.211 --rc genhtml_branch_coverage=1 00:03:09.211 --rc genhtml_function_coverage=1 00:03:09.211 --rc genhtml_legend=1 00:03:09.211 --rc geninfo_all_blocks=1 00:03:09.211 ' 00:03:09.211 20:19:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:09.211 --rc lcov_branch_coverage=1 00:03:09.211 --rc lcov_function_coverage=1 00:03:09.211 --rc genhtml_branch_coverage=1 00:03:09.211 --rc genhtml_function_coverage=1 00:03:09.211 --rc genhtml_legend=1 00:03:09.211 --rc geninfo_all_blocks=1 00:03:09.211 ' 00:03:09.211 20:19:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:09.211 --rc lcov_branch_coverage=1 00:03:09.211 --rc lcov_function_coverage=1 00:03:09.211 --rc genhtml_branch_coverage=1 00:03:09.211 --rc genhtml_function_coverage=1 00:03:09.211 --rc genhtml_legend=1 00:03:09.211 --rc geninfo_all_blocks=1 00:03:09.211 --no-external' 00:03:09.211 20:19:30 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:09.211 --rc lcov_branch_coverage=1 00:03:09.211 --rc lcov_function_coverage=1 00:03:09.211 --rc genhtml_branch_coverage=1 00:03:09.211 --rc genhtml_function_coverage=1 00:03:09.211 --rc genhtml_legend=1 00:03:09.211 --rc geninfo_all_blocks=1 00:03:09.211 --no-external' 00:03:09.211 20:19:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:09.211 lcov: LCOV version 1.14 00:03:09.211 20:19:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:27.308 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.308 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:39.514 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:39.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:39.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:39.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:39.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:39.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:39.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:39.772 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:39.772 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:39.773 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no 
functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:39.773 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:39.773 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:40.031 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:40.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:40.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:44.237 20:20:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:44.237 20:20:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:44.237 20:20:04 -- common/autotest_common.sh@10 -- # set +x 00:03:44.237 20:20:04 -- spdk/autotest.sh@91 -- # rm -f 00:03:44.237 20:20:04 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.237 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:44.237 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:44.237 20:20:05 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:44.237 20:20:05 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:44.237 20:20:05 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:44.237 20:20:05 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:44.237 20:20:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.237 20:20:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:44.237 20:20:05 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:44.237 20:20:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.237 20:20:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:44.237 20:20:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:44.237 20:20:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.237 20:20:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:44.237 20:20:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:44.237 20:20:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:44.237 20:20:05 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:44.237 20:20:05 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:44.237 20:20:05 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:44.237 20:20:05 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:44.237 20:20:05 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:44.237 20:20:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.237 20:20:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.237 20:20:05 -- spdk/autotest.sh@113 -- # 
block_in_use /dev/nvme0n1 00:03:44.237 20:20:05 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:44.237 20:20:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:44.237 No valid GPT data, bailing 00:03:44.237 20:20:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:44.496 20:20:05 -- scripts/common.sh@391 -- # pt= 00:03:44.496 20:20:05 -- scripts/common.sh@392 -- # return 1 00:03:44.496 20:20:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:44.496 1+0 records in 00:03:44.496 1+0 records out 00:03:44.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365631 s, 287 MB/s 00:03:44.496 20:20:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.496 20:20:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.496 20:20:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:44.496 20:20:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:44.496 20:20:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:44.496 No valid GPT data, bailing 00:03:44.496 20:20:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:44.496 20:20:05 -- scripts/common.sh@391 -- # pt= 00:03:44.496 20:20:05 -- scripts/common.sh@392 -- # return 1 00:03:44.496 20:20:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:44.496 1+0 records in 00:03:44.496 1+0 records out 00:03:44.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427338 s, 245 MB/s 00:03:44.497 20:20:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.497 20:20:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.497 20:20:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:44.497 20:20:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:44.497 20:20:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:44.497 No valid GPT data, bailing 00:03:44.497 20:20:05 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:44.497 20:20:05 -- scripts/common.sh@391 -- # pt= 00:03:44.497 20:20:05 -- scripts/common.sh@392 -- # return 1 00:03:44.497 20:20:05 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:44.497 1+0 records in 00:03:44.497 1+0 records out 00:03:44.497 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00429168 s, 244 MB/s 00:03:44.497 20:20:05 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.497 20:20:05 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.497 20:20:05 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:44.497 20:20:05 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:44.497 20:20:05 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:44.755 No valid GPT data, bailing 00:03:44.755 20:20:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:44.755 20:20:06 -- scripts/common.sh@391 -- # pt= 00:03:44.755 20:20:06 -- scripts/common.sh@392 -- # return 1 00:03:44.755 20:20:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:44.755 1+0 records in 00:03:44.755 1+0 records out 00:03:44.755 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436698 s, 240 MB/s 00:03:44.755 20:20:06 -- spdk/autotest.sh@118 -- # sync 00:03:44.755 20:20:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:44.755 20:20:06 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:44.755 20:20:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.653 20:20:07 -- spdk/autotest.sh@124 -- # uname -s 00:03:46.653 20:20:07 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:46.653 20:20:07 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:46.653 20:20:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.653 20:20:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.653 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:03:46.653 ************************************ 00:03:46.653 START TEST setup.sh 00:03:46.653 ************************************ 00:03:46.653 20:20:07 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:46.653 * Looking for test storage... 00:03:46.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:46.653 20:20:07 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:46.653 20:20:07 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:46.653 20:20:07 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:46.653 20:20:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.653 20:20:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.653 20:20:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.653 ************************************ 00:03:46.653 START TEST acl 00:03:46.653 ************************************ 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:46.653 * Looking for test storage... 
00:03:46.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:46.653 20:20:07 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:46.653 20:20:07 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:46.653 20:20:07 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:46.653 20:20:07 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:46.653 20:20:07 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:46.653 20:20:07 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:46.653 20:20:07 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:46.653 20:20:07 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.653 20:20:07 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.216 20:20:08 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.216 20:20:08 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.216 20:20:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.216 20:20:08 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.216 20:20:08 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.216 20:20:08 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:47.780 20:20:09 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:47.780 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.780 20:20:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.780 Hugepages 00:03:47.780 node hugesize free / total 00:03:47.780 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.780 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.780 20:20:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.038 00:03:48.038 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:48.038 20:20:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.038 20:20:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.038 20:20:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.038 20:20:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.038 ************************************ 00:03:48.038 START TEST denied 00:03:48.038 ************************************ 00:03:48.038 20:20:09 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:48.038 20:20:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:48.038 20:20:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:48.038 20:20:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.038 20:20:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:48.038 20:20:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:48.969 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.969 20:20:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.534 00:03:49.534 real 0m1.391s 00:03:49.534 user 0m0.591s 00:03:49.534 sys 0m0.732s 00:03:49.534 20:20:10 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.534 20:20:10 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:49.534 ************************************ 00:03:49.534 END TEST denied 00:03:49.534 ************************************ 00:03:49.534 20:20:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:49.534 20:20:10 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:49.534 20:20:10 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.534 20:20:10 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.534 20:20:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:49.534 ************************************ 00:03:49.534 START TEST allowed 00:03:49.534 ************************************ 00:03:49.534 20:20:10 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:49.534 20:20:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:49.534 20:20:10 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:49.534 20:20:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.534 20:20:10 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.534 20:20:10 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:50.538 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.538 20:20:11 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.105 00:03:51.105 real 0m1.466s 00:03:51.105 user 0m0.662s 00:03:51.105 sys 0m0.811s 00:03:51.105 20:20:12 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:51.105 ************************************ 00:03:51.105 20:20:12 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:51.105 END TEST allowed 00:03:51.105 ************************************ 00:03:51.105 20:20:12 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:51.105 00:03:51.106 real 0m4.573s 00:03:51.106 user 0m2.044s 00:03:51.106 sys 0m2.475s 00:03:51.106 20:20:12 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.106 20:20:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.106 ************************************ 00:03:51.106 END TEST acl 00:03:51.106 ************************************ 00:03:51.106 20:20:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:51.106 20:20:12 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:51.106 20:20:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.106 20:20:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.106 20:20:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.106 ************************************ 00:03:51.106 START TEST hugepages 00:03:51.106 ************************************ 00:03:51.106 20:20:12 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:51.106 * Looking for test storage... 00:03:51.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5902964 kB' 'MemAvailable: 7413312 kB' 'Buffers: 2436 kB' 'Cached: 1721756 kB' 'SwapCached: 0 kB' 'Active: 476760 kB' 'Inactive: 1351504 kB' 'Active(anon): 114560 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 105664 kB' 'Mapped: 48580 kB' 'Shmem: 
10488 kB' 'KReclaimable: 67168 kB' 'Slab: 141256 kB' 'SReclaimable: 67168 kB' 'SUnreclaim: 74088 kB' 'KernelStack: 6364 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 333044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.106 20:20:12 setup.sh.hugepages -- 
00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:51.106 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue
(the same IFS=': '/read/compare/continue cycle repeats for every remaining /proc/meminfo field, Inactive(anon) through HugePages_Surp, none of which matches Hugepagesize)
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
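The get_meminfo trace above is setup/common.sh scanning /proc/meminfo field by field until the requested key (here Hugepagesize) matches. A minimal bash sketch of that pattern, assuming a plain /proc/meminfo read (the helper name and the streaming read are simplifications of mine; the real function also handles per-node meminfo files and strips their "Node N" prefix):

    # Print the value of one /proc/meminfo field, e.g. Hugepagesize -> 2048 (kB).
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo_field Hugepagesize)   # 2048 on this runner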
00:03:51.367 20:20:12 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:51.367 20:20:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:51.367 20:20:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:51.367 20:20:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:51.367 ************************************
00:03:51.367 START TEST default_setup
00:03:51.367 ************************************
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
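The lines above compute the test's hugepage target: the 2097152 kB request divided by the 2048 kB default page size gives nr_hugepages=1024, which is then assigned to the single user-supplied node (node 0). A rough bash sketch of that bookkeeping, with names taken from the trace but the structure simplified by me (the kB unit of size is my reading of the numbers):

    default_hugepages=2048        # kB, from get_meminfo Hugepagesize
    nodes_test=()

    get_test_nr_hugepages() {
        local size=$1; shift                                 # requested pool size in kB
        local node_ids=("$@")                                # e.g. 0
        local nr_hugepages=$((size / default_hugepages))     # 2097152 / 2048 = 1024
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 2097152 0
    echo "${nodes_test[0]}"                                  # -> 1024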
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:51.367 20:20:12 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:51.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:51.933 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:51.933 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:03:52.195 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:52.195 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985980 kB' 'MemAvailable: 9496152 kB' 'Buffers: 2436 kB' 'Cached: 1721744 kB' 'SwapCached: 0 kB' 'Active: 493872 kB' 'Inactive: 1351508 kB' 'Active(anon): 131672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351508 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122824 kB' 'Mapped: 48724 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140804 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73996 kB' 'KernelStack: 6352 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
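The snapshot printed above already reflects the requested pool: HugePages_Total and HugePages_Free are both 1024 and Hugepagesize is 2048 kB, so 1024 x 2048 kB = 2097152 kB, which matches the Hugetlb line and the size passed to get_test_nr_hugepages. The same product can be recomputed on any host with a one-liner such as:

    awk '/^HugePages_Total/ {n = $2} /^Hugepagesize/ {sz = $2} END {print n * sz, "kB"}' /proc/meminfo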
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.196 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
(the read/compare/continue cycle repeats for each following field of the snapshot, MemFree through HardwareCorrupted, none of which matches AnonHugePages)
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
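With AnonHugePages confirmed to be 0, verify_nr_hugepages goes on to read HugePages_Surp (and, further down, HugePages_Rsvd) the same way. The same counters are also exposed per page-size pool under sysfs; on a host with 2 MiB hugepages configured they can be read directly, for example:

    # Per-pool hugepage counters (2 MiB pool shown); these are standard kernel sysfs files.
    pool=/sys/kernel/mm/hugepages/hugepages-2048kB
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
        printf '%-18s %s\n' "$f:" "$(cat "$pool/$f")"
    done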
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985480 kB' 'MemAvailable: 9495660 kB' 'Buffers: 2436 kB' 'Cached: 1721744 kB' 'SwapCached: 0 kB' 'Active: 493528 kB' 'Inactive: 1351516 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 122496 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140812 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6320 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.197 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
(the read/compare/continue cycle repeats for MemFree through HugePages_Rsvd, none of which matches HugePages_Surp)
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985480 kB' 'MemAvailable: 9495660 kB' 'Buffers: 2436 kB' 'Cached: 1721744 kB' 'SwapCached: 0 kB' 'Active: 493352 kB' 'Inactive: 1351516 kB' 'Active(anon): 131152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122288 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140812 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6304 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.199 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
(the same cycle continues for the following fields; the captured output ends partway through this scan, at the SwapFree comparison)
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.200 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:52.201 nr_hugepages=1024 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.201 resv_hugepages=0 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.201 surplus_hugepages=0 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.201 anon_hugepages=0 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985480 kB' 'MemAvailable: 9495660 kB' 'Buffers: 2436 kB' 'Cached: 1721744 kB' 'SwapCached: 0 kB' 'Active: 493288 kB' 'Inactive: 1351516 kB' 'Active(anon): 131088 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122224 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140812 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6288 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.201 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
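The long run of near-identical comparisons in this stretch of the trace all come from one loop: setup/common.sh splits each meminfo line on ': ', reads the field name and value, and either echoes the value when the name matches the requested key or moves on (every non-matching field shows up above as a '# continue' entry). A minimal standalone sketch of that pattern, using an illustrative helper name rather than the real setup/common.sh code:

    #!/usr/bin/env bash
    # Sketch of the key-matching loop the xtrace above is walking through.
    # get_meminfo_value KEY [MEMINFO_FILE] prints the value column for KEY.
    get_meminfo_value() {
        local get=$1
        local mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            # Each non-matching field corresponds to a "continue" entry in the trace.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo_value HugePages_Total   # e.g. prints 1024 on this test VM

The real helper additionally mapfiles the whole file into an array and can switch to a per-node meminfo path, which the later part of the trace exercises.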
00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
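Just below, the loop finally matches HugePages_Total and echoes 1024; together with the surp=0 and resv=0 values returned earlier, that feeds the test's consistency check (( 1024 == nr_hugepages + surp + resv )). A small self-contained sketch of that accounting check (illustrative only, not the hugepages.sh source):

    # Hugepage accounting check mirrored from the trace:
    # HugePages_Total must equal the requested page count plus surplus and reserved pages.
    nr_hugepages=1024    # the count the test configured, echoed as nr_hugepages=1024 in the trace

    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi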
00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.202 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985480 kB' 'MemUsed: 4256492 kB' 'SwapCached: 0 kB' 'Active: 493300 kB' 'Inactive: 1351516 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1724180 kB' 'Mapped: 48596 kB' 'AnonPages: 122248 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140812 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 74004 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 
20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.203 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
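The run of trace entries above is a single get_meminfo call inside default_setup: setup/common.sh reads /proc/meminfo and walks it with IFS=': ' and read -r var val _, hitting continue for every field that is not the requested key until it reaches HugePages_Surp, echoes its value (0 here) and returns. A minimal standalone sketch of that field-matching loop, for illustration only (the helper name meminfo_value is not part of setup/common.sh):

    # Sketch: look up one /proc/meminfo field, mirroring the IFS=': ' /
    # read -r var val _ / continue pattern traced above.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"                        # e.g. 0 for HugePages_Surp above
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }
    # Usage: meminfo_value HugePages_Surp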
00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.204 node0=1024 expecting 1024 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.204 00:03:52.204 real 0m0.947s 00:03:52.204 user 0m0.443s 00:03:52.204 sys 0m0.467s 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.204 20:20:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:52.204 ************************************ 00:03:52.204 END TEST default_setup 00:03:52.204 ************************************ 00:03:52.204 20:20:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.204 20:20:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:52.204 20:20:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.204 20:20:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.204 20:20:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.204 ************************************ 00:03:52.204 START TEST per_node_1G_alloc 00:03:52.204 ************************************ 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.204 20:20:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.204 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.205 20:20:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.776 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.776 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.776 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.776 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9027944 kB' 'MemAvailable: 10538128 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493864 kB' 'Inactive: 1351520 kB' 'Active(anon): 131664 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122740 kB' 'Mapped: 48624 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140788 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73980 kB' 'KernelStack: 6324 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.777 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9027944 kB' 'MemAvailable: 10538128 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493384 kB' 'Inactive: 1351520 kB' 'Active(anon): 131184 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140788 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73980 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.778 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.779 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9027944 kB' 'MemAvailable: 10538128 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493160 kB' 'Inactive: 1351520 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122120 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140788 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73980 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.780 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 
20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.781 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:52.782 nr_hugepages=512 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:52.782 resv_hugepages=0 00:03:52.782 surplus_hugepages=0 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.782 anon_hugepages=0 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9027944 kB' 'MemAvailable: 10538128 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493368 kB' 'Inactive: 1351520 kB' 'Active(anon): 131168 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122328 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 
kB' 'Slab: 140788 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73980 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.782 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 
20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.783 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9027944 kB' 'MemUsed: 3214028 kB' 'SwapCached: 0 kB' 'Active: 493488 kB' 'Inactive: 1351520 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1724184 kB' 'Mapped: 48596 kB' 'AnonPages: 122400 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66808 kB' 'Slab: 140788 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73980 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.784 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.785 node0=512 expecting 512 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:52.785 00:03:52.785 real 0m0.513s 00:03:52.785 user 0m0.245s 00:03:52.785 sys 0m0.299s 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.785 20:20:14 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:52.785 ************************************ 00:03:52.785 END TEST per_node_1G_alloc 00:03:52.785 ************************************ 00:03:52.785 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:52.785 20:20:14 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:52.785 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.785 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.785 20:20:14 
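The START TEST / END TEST banners and the real/user/sys timing above come from the autotest harness's run_test wrapper. A minimal sketch of that pattern, using a hypothetical helper name (this is not the actual common/autotest_common.sh code):

run_test_sketch() {
    # Print a banner, time the test function, and propagate its exit code.
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}
# Usage mirroring the trace above: run_test_sketch even_2G_alloc even_2G_alloc
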
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.785 ************************************ 00:03:52.785 START TEST even_2G_alloc 00:03:52.785 ************************************ 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.785 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.044 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.044 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc 
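get_test_nr_hugepages turns the requested 2097152 kB (2 GiB) into 1024 pages by dividing by the 2048 kB default hugepage size, and the trace then hands that count to scripts/setup.sh through NRHUGE with HUGE_EVEN_ALLOC=yes. A rough sketch of the same arithmetic (illustrative, not the hugepages.sh source):

# Derive the page count for a 2 GiB request from the running kernel's
# default hugepage size, then invoke setup.sh the way the trace does.
size_kb=2097152                                                      # 2 GiB in kB
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh
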
-- setup/hugepages.sh@92 -- # local surp 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.307 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981800 kB' 'MemAvailable: 9491984 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493948 kB' 'Inactive: 1351520 kB' 'Active(anon): 131748 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122892 kB' 'Mapped: 48932 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140792 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73984 kB' 'KernelStack: 6340 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.308 20:20:14 setup.sh.hugepages.even_2G_alloc -- 
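The escaped pattern test above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is checking /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word is the active THP mode. Spelled out as a small stand-alone check (a sketch, not the hugepages.sh code):

# The bracketed entry in this sysfs file marks the active mode; anything
# other than "[never]" means transparent hugepages are not disabled.
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    echo "transparent hugepages active: $thp_mode"
fi
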
setup/common.sh@32 -- # continue 00:03:53.308 20:20:14
[even_2G_alloc get_meminfo AnonHugePages scan: every field from MemAvailable through VmallocTotal in the dump above fails the AnonHugePages match, so each iteration takes continue and reads the next line]
00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val
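The scan condensed above is how setup/common.sh's get_meminfo resolves a single field: snapshot the meminfo source, strip any per-node prefix, and walk the "Field: value" pairs until the requested key matches, echoing its value. A self-contained sketch of that idea (an approximation with a hypothetical name; details such as the prefix handling are assumptions):

get_meminfo_sketch() {
    # Echo the value of one /proc/meminfo (or per-node meminfo) field.
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#Node [0-9] }              # drop a single-digit "Node N " prefix if present
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
# e.g. get_meminfo_sketch AnonHugePages  -> 0 on this VM, matching the dump above
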
_ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981800 kB' 'MemAvailable: 9491984 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493580 kB' 'Inactive: 
1351520 kB' 'Active(anon): 131380 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122488 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140796 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73988 kB' 'KernelStack: 6308 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.309 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.310 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:53.310 20:20:14
[even_2G_alloc get_meminfo HugePages_Surp scan: every field from Active through HugePages_Total in the dump above fails the HugePages_Surp match, so each iteration takes continue and reads the next line]
00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981576 kB' 'MemAvailable: 9491760 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493292 kB' 'Inactive: 1351520 kB' 'Active(anon): 131092 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 66808 kB' 'Slab: 140792 kB' 'SReclaimable: 66808 kB' 'SUnreclaim: 73984 kB' 'KernelStack: 6260 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.311 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.312 20:20:14 setup.sh.hugepages.even_2G_alloc -- 
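With AnonHugePages and HugePages_Surp already resolved to 0, the trace now fetches HugePages_Rsvd from the same snapshot, which reports HugePages_Total and HugePages_Free of 1024 against the requested 1024 pages. A simplified, stand-alone version of the kind of check this is building toward (the exact relation is an assumption; the real verify_nr_hugepages also walks per-node counts, as the earlier nodes_test loop shows):

# Tiny helper (hypothetical) plus a simplified pool check: surplus and
# reserved pages should not distort the allocated total.
mi() { awk -v f="$1:" '$1 == f {print $2}' /proc/meminfo; }
expected=1024
surp=$(mi HugePages_Surp)     # 0 in the dump above
resv=$(mi HugePages_Rsvd)     # 0
total=$(mi HugePages_Total)   # 1024
if (( total - surp - resv == expected )); then
    echo "hugepage pool matches the requested $expected pages"
else
    echo "unexpected hugepage pool: total=$total surp=$surp resv=$resv"
fi
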
setup/common.sh@31 -- # read -r var val _ 00:03:53.312 20:20:14
[even_2G_alloc get_meminfo HugePages_Rsvd scan: every field from Active(anon) through CommitLimit fails the HugePages_Rsvd match, so each iteration takes continue and reads the next line]
00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:53.313 nr_hugepages=1024 00:03:53.313 resv_hugepages=0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.313 surplus_hugepages=0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.313 anon_hugepages=0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.313 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981576 kB' 'MemAvailable: 9491768 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493100 kB' 'Inactive: 1351520 kB' 'Active(anon): 130900 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122284 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140812 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73988 kB' 'KernelStack: 6304 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.314 20:20:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.314 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
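The long run of "continue" lines above is the hugepages helper scanning /proc/meminfo one key at a time until it reaches HugePages_Total; the same scan was just done for HugePages_Rsvd. Below is a minimal reconstruction of that lookup and of the reserved/surplus accounting check, assembled from the xtrace alone; it is not the verbatim setup/common.sh or setup/hugepages.sh source, and the helper name, argument order, and the 1024-page target are taken from this run's trace.

#!/usr/bin/env bash
# Sketch of the meminfo lookup stepped through above (reconstruction, not the
# real setup/common.sh helper).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    # A per-node query reads the node's own meminfo, whose rows carry a
    # "Node N " prefix that must be stripped before matching.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}            # no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<< "$line"
        # Every non-matching row is skipped; xtrace logs one [[ ... ]] test
        # and one "continue" per skipped key, which is the wall of lines above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# The check that closes even_2G_alloc reduces to the values echoed in the trace:
nr_hugepages=$(get_meminfo HugePages_Total)   # 1024 in this run
resv=$(get_meminfo HugePages_Rsvd)            # 0
surp=$(get_meminfo HugePages_Surp 0)          # 0, queried against node 0
(( 1024 == nr_hugepages + surp + resv )) && echo 'node0=1024 expecting 1024'

Reading the whole file through a single read loop is why every meminfo row appears in the trace rather than only the requested key.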
00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.315 20:20:14 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.315 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981576 kB' 'MemUsed: 4260396 kB' 'SwapCached: 0 kB' 'Active: 493156 kB' 'Inactive: 1351520 kB' 'Active(anon): 130956 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1724184 kB' 'Mapped: 48596 kB' 'AnonPages: 122368 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66824 kB' 'Slab: 140812 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.316 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.317 node0=1024 expecting 1024 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:53.317 00:03:53.317 real 0m0.493s 00:03:53.317 user 0m0.258s 00:03:53.317 sys 0m0.265s 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:53.317 20:20:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.317 ************************************ 00:03:53.317 END TEST even_2G_alloc 00:03:53.317 ************************************ 00:03:53.317 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:53.317 20:20:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:53.317 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:53.317 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.317 20:20:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.317 ************************************ 00:03:53.317 START TEST odd_alloc 00:03:53.317 ************************************ 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
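odd_alloc, which begins here, requests HUGEMEM=2049 MiB (2098176 kB); get_test_nr_hugepages turns that into nr_hugepages=1025, i.e. the requested size rounded up to a whole number of 2048 kB pages, an intentionally odd count, and with a single NUMA node the whole amount is booked against node 0. The arithmetic below is a minimal sketch reconstructed from this trace; the real get_test_nr_hugepages / get_test_nr_hugepages_per_node helpers in setup/hugepages.sh may derive the numbers differently.

#!/usr/bin/env bash
# Sizing sketch for odd_alloc as seen in the trace above (illustrative only).
hugemem_mb=2049                            # HUGEMEM exported by the test
size_kb=$((hugemem_mb * 1024))             # 2098176 kB, the value passed in
default_hugepage_kb=2048                   # Hugepagesize reported in meminfo
# Round up to whole hugepages: 2098176 / 2048 = 1024.5 -> 1025 pages.
nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"          # nr_hugepages=1025

# One node and no user-supplied node list: the whole count lands on node 0,
# matching nodes_test[_no_nodes - 1]=1025 in the trace.
declare -a nodes_test
nodes_test[0]=$nr_hugepages

HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes are then exported before "setup output" re-runs scripts/setup.sh, whose device and hugepage output follows.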
00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.317 20:20:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.888 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.888 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.888 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981160 kB' 'MemAvailable: 9491352 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493808 kB' 'Inactive: 1351520 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123028 kB' 'Mapped: 48708 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140832 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6340 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.888 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 
20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.889 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 
20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
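Most of the volume in this part of the log is get_meminfo from setup/common.sh: it snapshots the chosen meminfo file into an array (the long printf '%s\n' 'MemTotal: ...' lines), strips any "Node N " prefix, then walks the "key: value" pairs until it reaches the requested key and echoes its value. A compact sketch reconstructed from the trace; the names follow the trace, but the exact control flow (and the indirection behind the printf) is simplified, so treat this as an approximation rather than the real setup/common.sh:

    # Approximate re-creation of get_meminfo as seen in the xtrace above.
    shopt -s extglob                                  # needed for the +([0-9]) prefix strip

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node statistics live under /sys; with no node argument this test fails
        # (the trace shows /sys/devices/system/node/node/meminfo) and /proc/meminfo is used.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                     # the real script feeds this via a helper
        mem=("${mem[@]#Node +([0-9]) }")              # drop "Node N " prefixes on per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue          # this comparison is the repeated trace line
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp                        # prints 0 on the box in this log

verify_nr_hugepages calls it once per statistic, for AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total, which is why the same per-key scan repeats several times below.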
00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981436 kB' 'MemAvailable: 9491628 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493296 kB' 'Inactive: 1351520 kB' 'Active(anon): 131096 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122472 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140840 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74016 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 
20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.890 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 
20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.891 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981688 kB' 'MemAvailable: 9491880 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493196 kB' 'Inactive: 1351520 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140840 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74016 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
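A note on reading the wall of [[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] lines: the script compares each key against the quoted variable "$get", and bash's xtrace prints a quoted pattern operand of [[ ]] with every character backslash-escaped, so the trace shows an unambiguous literal match rather than a glob. A minimal reproduction, nothing beyond stock bash behaviour:

    get=HugePages_Rsvd
    set -x
    [[ MemTotal == "$get" ]]    # traced roughly as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    set +x

Each scan stops when it reaches the requested key and echoes its value; on this machine AnonHugePages, HugePages_Surp and HugePages_Rsvd all come back 0, and verify_nr_hugepages then checks that the kernel's HugePages_Total (1025) matches nr_hugepages + surp + resv, the (( 1025 == nr_hugepages + surp + resv )) test visible near the end of this excerpt.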
00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.892 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:53.893 nr_hugepages=1025 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.893 resv_hugepages=0 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.893 surplus_hugepages=0 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.893 anon_hugepages=0 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.893 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981688 kB' 'MemAvailable: 9491880 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493388 kB' 'Inactive: 1351520 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122300 kB' 'Mapped: 48596 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140840 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74016 kB' 'KernelStack: 6320 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.894 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7982740 kB' 'MemUsed: 4259232 kB' 'SwapCached: 0 kB' 'Active: 493140 kB' 'Inactive: 1351516 kB' 'Active(anon): 130940 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1724180 kB' 'Mapped: 48596 kB' 'AnonPages: 122296 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66824 kB' 'Slab: 140832 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.895 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
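The lookup that just finished above is the setup/common.sh get_meminfo path: with no node argument it reads /proc/meminfo, with node=0 it switches to /sys/devices/system/node/node0/meminfo, strips the "Node <N>" prefix, splits each line on ': ', and echoes the value once the requested key (here HugePages_Rsvd, then HugePages_Surp) matches. A minimal self-contained sketch of that pattern follows; meminfo_value is an illustrative name, not the repository's actual helper, and the parsing approximates what the xtrace shows rather than reproducing the exact code.

meminfo_value() {
    # key: meminfo field to look up (e.g. HugePages_Surp); node: optional NUMA node number.
    local key=$1 node=${2:-} file=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <N> "; drop that prefix, then
    # split on ':' and whitespace the same way the traced helper does.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$file")
    return 1
}

Called as "meminfo_value HugePages_Surp 0", a sketch like this would print the 0 that the trace above just echoed for node 0.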
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:53.896 node0=1025 expecting 1025 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:03:53.896
00:03:53.896 real 0m0.538s
00:03:53.896 user 0m0.266s
00:03:53.896 sys 0m0.296s
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:53.896 20:20:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:53.896 ************************************
00:03:53.896 END TEST odd_alloc
00:03:53.896 ************************************
00:03:53.896 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:53.896 20:20:15 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:53.896 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:53.896 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:53.896 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:53.896 ************************************
00:03:53.896 START TEST custom_alloc
00:03:53.896 ************************************
00:03:53.896 20:20:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:53.896 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:53.896 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc --
setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.897 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.468 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.468 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.468 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
# verify_nr_hugepages 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9032764 kB' 'MemAvailable: 10542956 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493716 kB' 'Inactive: 1351520 kB' 'Active(anon): 131516 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122728 kB' 'Mapped: 48716 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140852 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74028 kB' 'KernelStack: 6308 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
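The field being resolved in this stretch of the trace is AnonHugePages, and it is only consulted because the check a little earlier ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) found that transparent hugepages are not set to [never] on this VM. A rough standalone equivalent of that step, assuming the standard sysfs location for the THP mode string and using awk in place of the traced read loop:

# Sketch only: report anonymous hugepage usage when THP is not disabled.
thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
anon_hugepages=0
if [[ $thp_mode != *"[never]"* ]]; then
    # THP may be in use, so the AnonHugePages counter is meaningful.
    anon_hugepages=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon_hugepages=${anon_hugepages}"

On the run captured here this prints anon_hugepages=0, matching the 'AnonHugePages: 0 kB' field in the snapshot printed above.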
00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.468 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9032764 kB' 'MemAvailable: 10542956 kB' 'Buffers: 2436 kB' 'Cached: 
1721748 kB' 'SwapCached: 0 kB' 'Active: 493404 kB' 'Inactive: 1351520 kB' 'Active(anon): 131204 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122620 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140836 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74012 kB' 'KernelStack: 6308 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.469 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.470 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9032764 kB' 'MemAvailable: 10542956 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493192 kB' 'Inactive: 1351520 kB' 'Active(anon): 130992 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122404 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140828 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74004 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.471 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.472 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
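The HugePages_Rsvd scan just below ends the same way (echo 0, so resv=0), after which hugepages.sh echoes the collected counters (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and performs the actual verification at lines @107/@109: the 512 pages configured for this custom_alloc case must equal nr_hugepages plus surplus plus reserved, and nr_hugepages itself must be 512, presumably ahead of the per-node checks hinted at by the node/sorted_t/sorted_s locals declared at the top of verify_nr_hugepages. A minimal sketch of that accounting, using the values echoed in this run (variable names mirror the echoed output, not the script's internals):

    # Sketch of the verify_nr_hugepages accounting visible in the next entries.
    nr_hugepages=512   # hugepages.sh@102: echo nr_hugepages=512
    resv=0             # HugePages_Rsvd returned by the scan below
    surp=0             # HugePages_Surp returned by the previous scan
    anon=0             # AnonHugePages (THP) from the first scan; echoed as anon_hugepages=0

    # hugepages.sh@107 and @109: the allocated pages must be fully accounted for.
    (( 512 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
    (( 512 == nr_hugepages ))               || echo "unexpected nr_hugepages"      >&2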
00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:54.473 nr_hugepages=512 00:03:54.473 resv_hugepages=0 
00:03:54.473 surplus_hugepages=0 00:03:54.473 anon_hugepages=0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9033016 kB' 'MemAvailable: 10543208 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493152 kB' 'Inactive: 1351520 kB' 'Active(anon): 130952 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140820 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73996 kB' 'KernelStack: 6304 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.473 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 
20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.474 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
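Just before this point the 512-page total was confirmed against nr_hugepages + surplus + reserved, and get_nodes (hugepages.sh@27..@33) walked /sys/devices/system/node/node* to record each node's current 2 MiB hugepage count, giving nodes_sys[0]=512 and no_nodes=1 on this single-node VM. A short sketch of that enumeration follows; the per-node counter path is an assumption based on the standard sysfs layout rather than something printed verbatim in the trace.

    get_nodes() {
        declare -gA nodes_sys=()
        local node id
        for node in /sys/devices/system/node/node[0-9]*; do
            [[ -d $node ]] || continue
            id=${node##*node}
            # Assumed source of the 512 seen above: the node's 2048 kB hugepage counter.
            nodes_sys[$id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))
    }

    get_nodes && echo "no_nodes=$no_nodes node0=${nodes_sys[0]}"   # node0=512 in this run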
-- # mem_f=/proc/meminfo 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9033016 kB' 'MemUsed: 3208956 kB' 'SwapCached: 0 kB' 'Active: 493416 kB' 'Inactive: 1351520 kB' 'Active(anon): 131216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1724184 kB' 'Mapped: 48600 kB' 'AnonPages: 122380 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66824 kB' 'Slab: 140820 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.475 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.476 node0=512 expecting 512 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.476 00:03:54.476 real 0m0.563s 00:03:54.476 user 0m0.282s 00:03:54.476 sys 0m0.278s 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.476 20:20:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.476 ************************************ 00:03:54.476 END TEST custom_alloc 
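That closes out the custom_alloc case: hugepages.sh@110 confirmed that HugePages_Total (512) equals the requested pages plus surplus plus reserved, the per-node pass read HugePages_Surp from /sys/devices/system/node/node0/meminfo, and the test printed 'node0=512 expecting 512' before finishing in about 0.56 s. A simplified reconstruction of that verification is sketched below, reusing the get_meminfo helper sketched earlier; the exact per-node bookkeeping in hugepages.sh@112..@130 differs slightly, so treat this as an illustration rather than the script itself.

    verify_custom_alloc() {
        local expected=${1:-512}            # pages requested by the test
        local resv surp total
        resv=$(get_meminfo HugePages_Rsvd)
        surp=$(get_meminfo HugePages_Surp)
        total=$(get_meminfo HugePages_Total)

        # Global consistency: allocated pages == requested + surplus + reserved.
        (( total == expected + surp + resv )) || return 1

        # Per-node pass: the same counters exist in each node's meminfo file.
        local node_dir node node_total node_surp
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            [[ -d $node_dir ]] || continue
            node=${node_dir##*node}
            node_total=$(get_meminfo HugePages_Total "$node")
            node_surp=$(get_meminfo HugePages_Surp "$node")
            echo "node${node}=$(( node_total - node_surp )) expecting ${expected}"
            (( node_total - node_surp == expected )) || return 1
        done
    }

    verify_custom_alloc 512    # prints "node0=512 expecting 512" on this runner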
00:03:54.476 ************************************ 00:03:54.476 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:54.476 20:20:15 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:54.476 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.476 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.476 20:20:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.734 ************************************ 00:03:54.734 START TEST no_shrink_alloc 00:03:54.734 ************************************ 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.734 20:20:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.997 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.997 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:54.997 
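The no_shrink_alloc case begins the same way: get_test_nr_hugepages 2097152 0 converts the requested size into nr_hugepages=1024 pinned to node 0, and scripts/setup.sh then rebinds the NVMe controllers (the PCI lines above) before verify_nr_hugepages runs. A minimal sketch of that size-to-pages step is below; it assumes the size argument is in kB, which is consistent with 1024 pages of the 2048 kB default hugepage size reported elsewhere in the log.

    get_test_nr_hugepages() {
        local size_kb=$1; shift
        local node_ids=("$@")                   # "0" in this run
        local default_kb nr_hugepages id
        default_kb=$(get_meminfo Hugepagesize)  # 2048 on this runner
        (( size_kb >= default_kb )) || return 1 # must cover at least one page
        nr_hugepages=$(( size_kb / default_kb ))
        # Assign the full count to every requested node; no per-node split is
        # needed on this single-node VM.
        declare -gA nodes_test=()
        for id in "${node_ids[@]}"; do
            nodes_test[$id]=$nr_hugepages
        done
        echo "nr_hugepages=$nr_hugepages nodes=${node_ids[*]}"
    }

    get_test_nr_hugepages 2097152 0    # -> nr_hugepages=1024 nodes=0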
20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985020 kB' 'MemAvailable: 9495212 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493672 kB' 'Inactive: 1351520 kB' 'Active(anon): 131472 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 122628 kB' 'Mapped: 48732 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140796 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73972 kB' 'KernelStack: 6308 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
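Before it starts counting, verify_nr_hugepages checks whether transparent hugepages are pinned off; the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test above does not match [never], so AnonHugePages is sampled (the scan that follows finds 0 kB) and can be folded into the accounting. A minimal reconstruction of that guard, assuming the string comes from the usual /sys/kernel/mm/transparent_hugepage/enabled file:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in use, so record THP-backed anonymous memory separately.
        anon=$(get_meminfo AnonHugePages)                  # 0 kB in the trace below
    fi
    echo "anon_hugepages=$anon"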
00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.997 
[setup/common.sh@31-32: remaining /proc/meminfo keys, Inactive(anon) through HardwareCorrupted, each read and compared against AnonHugePages; no match, continue]
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
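The entries above are the tail of one get_meminfo() call: setup/common.sh walks /proc/meminfo one "Key: value" line at a time, skips every non-matching key with continue, and echoes the value of the first match (here AnonHugePages, value 0). Below is a simplified bash sketch of that loop, reconstructed from the -x trace rather than copied from setup/common.sh, so the exact guards, line numbers and helper details are assumptions:

    #!/usr/bin/env bash
    # Reconstructed sketch of the scan traced above; not the verbatim SPDK helper.
    shopt -s extglob # the +([0-9]) pattern below needs extended globbing

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # when a NUMA node is requested and its sysfs meminfo exists, read that file instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # sysfs lines carry a "Node N " prefix; strip it

        # the per-key scan visible in the trace: continue on mismatch, echo the value on match
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")

        return 1
    }

    get_meminfo AnonHugePages # in this run prints 0, which hugepages.sh stores as anon=0

hugepages.sh captures the echoed value, which is why the very next trace entry records anon=0 before the next lookup starts.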
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-29: get_meminfo prologue, local get=HugePages_Surp, node unset, mem_f=/proc/meminfo, no per-node sysfs meminfo, mapfile -t mem, strip 'Node N ' prefixes]
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.998 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.997 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985844 kB' 'MemAvailable: 9496036 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493516 kB' 'Inactive: 1351520 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122424 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140792 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73968 kB' 'KernelStack: 6304 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32: each key of the snapshot above, MemTotal through HugePages_Rsvd, compared against HugePages_Surp; no match, continue]
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
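As an aside, not part of the test flow: the counters that these lookups fetch one key per get_meminfo call can also be pulled from a snapshot like the one printed above in a single pass. A hypothetical one-liner for eyeballing the same fields:

    # Illustration only; prints the hugepage-related fields of /proc/meminfo in one pass.
    awk -F': +' '/^(HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages|Hugepagesize|Hugetlb)/ { print $1, $2 }' /proc/meminfo

Run against the snapshot dumped above it would print AnonHugePages 0 kB, HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB and Hugetlb 2097152 kB, consistent with the anon=0 and surp=0 values the test just recorded.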
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-29: get_meminfo prologue, local get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, no per-node sysfs meminfo, mapfile -t mem, strip 'Node N ' prefixes]
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.000 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7985824 kB' 'MemAvailable: 9496016 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493516 kB' 'Inactive: 1351520 kB' 'Active(anon): 131316 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140792 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73968 kB' 'KernelStack: 6336 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
[setup/common.sh@31-32: each key of the snapshot above, MemTotal through HugePages_Free, compared against HugePages_Rsvd; no match, continue]
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.002 nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
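The arithmetic entries just traced are the actual no_shrink_alloc assertion: after the extra allocation, the hugepage pool must still be intact, so the expected 1024 pages must equal nr_hugepages plus surplus plus reserved, which here is 1024 == 1024 + 0 + 0. A condensed sketch of that bookkeeping, reusing the get_meminfo sketch shown earlier; the variable names follow the trace, while the expected count and the nr_hugepages source are assumptions, not the verbatim hugepages.sh code:

    # Sketch of the check around setup/hugepages.sh@97-110 as seen in the trace.
    expected=1024                               # assumption: expands to the literal 1024 in the (( )) lines above

    anon=$(get_meminfo AnonHugePages)           # 0 in this run
    surp=$(get_meminfo HugePages_Surp)          # 0
    resv=$(get_meminfo HugePages_Rsvd)          # 0
    nr_hugepages=$(< /proc/sys/vm/nr_hugepages) # assumption: 1024, matching "nr_hugepages=1024" above

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # 1024 == 1024 + 0 + 0: allocating other memory must not have shrunk the pool
    ((expected == nr_hugepages + surp + resv))
    ((expected == nr_hugepages))
    # the trace then re-reads HugePages_Total; its use comes after this excerpt
    get_meminfo HugePages_Total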
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-29: get_meminfo prologue, local get=HugePages_Total, node unset, mem_f=/proc/meminfo, no per-node sysfs meminfo, mapfile -t mem, strip 'Node N ' prefixes]
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986084 kB' 'MemAvailable: 9496276 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493264 kB' 'Inactive: 1351520 kB' 'Active(anon): 131064 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122448 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140792 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73968 kB' 'KernelStack: 6336 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
00:03:55.002 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: per-key comparison of the snapshot above against HugePages_Total under way, MemTotal through Mapped checked so far; no match, continue]
00:03:55.003 20:20:16
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.003 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7986176 kB' 'MemUsed: 4255796 kB' 'SwapCached: 0 kB' 'Active: 493444 kB' 'Inactive: 1351520 kB' 'Active(anon): 131244 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1724184 kB' 'Mapped: 48600 kB' 'AnonPages: 122368 kB' 
'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66824 kB' 'Slab: 140788 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.004 20:20:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.004 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 
20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.264 node0=1024 expecting 1024 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.264 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.528 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.528 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.528 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:55.528 20:20:16 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981420 kB' 'MemAvailable: 9491612 kB' 'Buffers: 2436 kB' 'Cached: 1721748 kB' 'SwapCached: 0 kB' 'Active: 493896 kB' 'Inactive: 1351520 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351520 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122876 kB' 'Mapped: 48736 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140748 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73924 kB' 'KernelStack: 6372 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 349688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
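The scripts/setup.sh re-run above (NRHUGE=512, CLEAR_HUGE=no) reports "Requested 512 hugepages but 1024 already allocated on node0" and leaves the allocation untouched, which is exactly what the no_shrink_alloc case exercises; the trace then re-enters verify_nr_hugepages to confirm the totals. The decision being exercised looks roughly like this (illustrative sketch only; scripts/setup.sh has its own implementation and the variable names here are ours):

  # Do not shrink an existing per-node allocation when fewer pages are requested.
  node=0 requested=512
  nr=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
  current=$(cat "$nr")
  if (( current >= requested )); then
      echo "INFO: Requested $requested hugepages but $current already allocated on node$node"
  else
      echo "$requested" > "$nr"   # needs root; only grow, never shrink here
  fi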
00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.528 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
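The lookup in progress here is the AnonHugePages read triggered by the transparent-hugepage check at hugepages.sh@96: because /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never" rather than containing "[never]", the test also accounts for anonymous hugepages, which come back as 0 below. In compact form (illustrative sketch reusing the get_meminfo_sketch helper sketched earlier):

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)              # 0 in this run
  fi
  echo "anon_hugepages=$anon"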
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-32 skips the remaining non-matching /proc/meminfo keys (VmallocTotal through HardwareCorrupted) until AnonHugePages matches]
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.529 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981420 kB' 'MemAvailable: 9491616 kB' 'Buffers: 2436 kB' 'Cached: 1721752 kB' 'SwapCached: 0 kB' 'Active: 493512 kB' 'Inactive: 1351524 kB' 'Active(anon): 131312 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140756 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73932 kB' 'KernelStack: 6340 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 compares every key of the snapshot above against HugePages_Surp and continues past each non-matching key]
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
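The lookups traced here all go through the same setup/common.sh helper: it reads /proc/meminfo once into an array and scans it key by key until the requested field (AnonHugePages, HugePages_Surp, HugePages_Rsvd, ...) matches, then echoes that field's value. The sketch below is reconstructed from the xtrace only; it omits the per-NUMA-node path hinted at by the /sys/devices/system/node/.../meminfo check, and get_meminfo_sketch is a stand-in name, not the upstream function.

get_meminfo_sketch() {
	local get=$1 var val _
	local mem_f=/proc/meminfo
	local -a mem

	# Capture the whole snapshot once so every key is matched against the same state.
	mapfile -t mem <"$mem_f"

	# "IFS=': '" splits each "Key:   value kB" line into the key and its numeric value;
	# non-matching keys are skipped (the long run of "continue" entries in the trace).
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Example mirroring the trace: all three counters are 0 on this runner.
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
echo "anon=$anon surp=$surp resv=$resv"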
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981672 kB' 'MemAvailable: 9491868 kB' 'Buffers: 2436 kB' 'Cached: 1721752 kB' 'SwapCached: 0 kB' 'Active: 493408 kB' 'Inactive: 1351524 kB' 'Active(anon): 131208 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140776 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 73952 kB' 'KernelStack: 6336 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
00:03:55.531 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 compares every key of the snapshot above against HugePages_Rsvd and continues past each non-matching key]
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:55.533 nr_hugepages=1024
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:55.533 resv_hugepages=0
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:55.533 surplus_hugepages=0
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:55.533 anon_hugepages=0
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.533 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981672 kB' 'MemAvailable: 9491868 kB' 'Buffers: 2436 kB' 'Cached: 1721752 kB' 'SwapCached: 0 kB' 'Active: 493180 kB' 'Inactive: 1351524 kB' 'Active(anon): 130980 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122380 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66824 kB' 'Slab: 140832 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74008 kB' 'KernelStack: 6320 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 178028 kB' 'DirectMap2M: 4016128 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: setup/common.sh@31-32 compares each key of the snapshot above against HugePages_Total; the per-key trace continues below]
00:03:55.534 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.534 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:03:55.534 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.534 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.534 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
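Just below, the scan finds HugePages_Total, echoes 1024 and returns, and hugepages.sh then repeats the same scan against the node-local meminfo file (/sys/devices/system/node/node0/meminfo, with the leading "Node 0 " prefix stripped) to check HugePages_Surp for node 0. The per-node pool enumeration behind get_nodes can be sketched as below; the sysfs paths are assumptions, since the xtrace only shows the already-evaluated value 1024.

    # Sketch only: enumerate NUMA nodes and read each node's 2 MiB hugepage pool.
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB/nr_hugepages
        [[ -e $hp ]] || continue
        nodes_sys[${node##*node}]=$(< "$hp")
    done
    echo "no_nodes=${#nodes_sys[@]}"
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[$n]} expecting 1024"   # cf. the echo at hugepages.sh@128
    done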
00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7981672 kB' 'MemUsed: 4260300 kB' 'SwapCached: 0 kB' 'Active: 
493172 kB' 'Inactive: 1351524 kB' 'Active(anon): 130972 kB' 'Inactive(anon): 0 kB' 'Active(file): 362200 kB' 'Inactive(file): 1351524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1724188 kB' 'Mapped: 48600 kB' 'AnonPages: 122112 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66824 kB' 'Slab: 140832 kB' 'SReclaimable: 66824 kB' 'SUnreclaim: 74008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 
20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.535 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.536 node0=1024 expecting 1024 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.536 00:03:55.536 real 0m1.045s 00:03:55.536 user 0m0.519s 00:03:55.536 sys 0m0.564s 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.536 20:20:17 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.536 ************************************ 00:03:55.536 END TEST no_shrink_alloc 00:03:55.536 ************************************ 00:03:55.795 20:20:17 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:55.795 
20:20:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:55.795 20:20:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:55.795 00:03:55.795 real 0m4.563s 00:03:55.795 user 0m2.177s 00:03:55.795 sys 0m2.419s 00:03:55.795 20:20:17 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.795 ************************************ 00:03:55.795 END TEST hugepages 00:03:55.795 ************************************ 00:03:55.795 20:20:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.795 20:20:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:55.795 20:20:17 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:55.795 20:20:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.795 20:20:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.795 20:20:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:55.795 ************************************ 00:03:55.795 START TEST driver 00:03:55.795 ************************************ 00:03:55.795 20:20:17 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:55.795 * Looking for test storage... 00:03:55.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:55.795 20:20:17 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:55.795 20:20:17 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.795 20:20:17 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.362 20:20:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:56.362 20:20:17 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.362 20:20:17 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.362 20:20:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:56.362 ************************************ 00:03:56.362 START TEST guess_driver 00:03:56.362 ************************************ 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
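At this point guess_driver is choosing a userspace driver: vfio-pci is picked only if IOMMU groups are populated (or the unsafe no-IOMMU module parameter reads Y); with zero groups in this VM the vfio helper returns 1 and the script falls back to uio_pci_generic, confirming it is loadable with modprobe --show-depends. A condensed sketch of that decision, hedged rather than the verbatim setup/driver.sh:

    # Sketch of the driver-pick logic traced above.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*) unsafe=''
        [[ -d ${groups[0]} ]] || groups=()        # glob may not have matched anything
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }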
00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:56.362 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:56.362 Looking for driver=uio_pci_generic 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.362 20:20:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.929 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:56.929 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:56.929 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.187 20:20:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.753 00:03:57.753 real 0m1.399s 00:03:57.753 user 0m0.562s 00:03:57.753 sys 0m0.851s 00:03:57.753 20:20:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:57.753 20:20:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:57.753 ************************************ 00:03:57.753 END TEST guess_driver 00:03:57.753 ************************************ 00:03:57.753 20:20:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:57.753 00:03:57.753 real 0m2.050s 00:03:57.753 user 0m0.778s 00:03:57.753 sys 0m1.339s 00:03:57.753 20:20:19 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.753 20:20:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:57.753 ************************************ 00:03:57.753 END TEST driver 00:03:57.753 ************************************ 00:03:57.753 20:20:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:57.753 20:20:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:57.753 20:20:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.753 20:20:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.753 20:20:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.753 ************************************ 00:03:57.753 START TEST devices 00:03:57.753 ************************************ 00:03:57.753 20:20:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:58.012 * Looking for test storage... 00:03:58.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.012 20:20:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:58.012 20:20:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:58.012 20:20:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.012 20:20:19 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
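From here the devices test vets each nvme namespace before using it: get_zoned_devs excludes zoned block devices (queue/zoned reads "none" for ordinary namespaces), and the in-use probe looks for an existing partition table, which is where the repeated 'No valid GPT data, bailing' messages below come from before blkid -s PTTYPE is consulted. A hedged sketch of those two checks (the real block_in_use also calls scripts/spdk-gpt.py first):

    # Sketch: the per-device eligibility checks driving the trace below.
    is_block_zoned() {                 # succeeds if the device is zoned
        local dev=$1
        [[ -e /sys/block/$dev/queue/zoned &&
           $(< /sys/block/$dev/queue/zoned) != none ]]
    }
    has_partition_table() {            # succeeds if blkid reports a PTTYPE
        local dev=$1 pt
        pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
        [[ -n $pt ]]
    }
    # e.g. is_block_zoned nvme0n1 || echo "nvme0n1 is usable for the mount tests"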
00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:58.579 20:20:19 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:58.579 20:20:19 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:58.579 20:20:19 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:58.579 20:20:19 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:58.579 No valid GPT data, bailing 00:03:58.579 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:58.579 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:58.579 20:20:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:58.580 20:20:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:58.580 20:20:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:58.580 20:20:20 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:58.580 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:58.580 
20:20:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:58.580 20:20:20 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:58.839 No valid GPT data, bailing 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:58.839 No valid GPT data, bailing 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:58.839 20:20:20 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:58.839 No valid GPT data, bailing 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:58.839 20:20:20 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:58.839 20:20:20 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:58.839 20:20:20 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:58.839 20:20:20 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.839 20:20:20 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.839 20:20:20 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:58.839 ************************************ 00:03:58.839 START TEST nvme_mount 00:03:58.839 ************************************ 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:58.839 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:58.840 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:58.840 20:20:20 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:03:58.840 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:58.840 20:20:20 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:00.226 Creating new GPT entries in memory. 00:04:00.226 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.226 other utilities. 00:04:00.226 20:20:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.226 20:20:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.226 20:20:21 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.226 20:20:21 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.226 20:20:21 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:01.162 Creating new GPT entries in memory. 00:04:01.162 The operation has completed successfully. 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58956 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:01.162 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.163 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:01.421 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:01.421 20:20:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:01.986 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.986 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:01.987 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:01.987 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.987 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.243 20:20:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:02.243 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.244 20:20:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.502 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.502 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:02.502 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.502 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.502 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.502 20:20:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:02.761 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.761 00:04:02.761 real 0m3.859s 00:04:02.761 user 0m0.653s 00:04:02.761 sys 0m0.955s 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.761 20:20:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:02.761 ************************************ 00:04:02.761 END TEST nvme_mount 00:04:02.761 ************************************ 00:04:02.761 20:20:24 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:02.761 20:20:24 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:02.761 20:20:24 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.761 20:20:24 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.761 20:20:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:02.761 ************************************ 00:04:02.761 START TEST dm_mount 00:04:02.761 ************************************ 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:02.761 20:20:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.136 Creating new GPT entries in memory. 00:04:04.136 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.136 other utilities. 00:04:04.136 20:20:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.136 20:20:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.136 20:20:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.136 20:20:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.136 20:20:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:05.072 Creating new GPT entries in memory. 00:04:05.072 The operation has completed successfully. 00:04:05.072 20:20:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.072 20:20:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.072 20:20:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.072 20:20:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.072 20:20:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:06.007 The operation has completed successfully. 
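The sgdisk calls traced above repartition the backing drive for the dm_mount test: the existing GPT/MBR metadata is wiped and two equal partitions are laid down, with scripts/sync_dev_uevents.sh waiting for the matching block/partition uevents between steps. A minimal standalone sketch of the same sequence, using only values visible in the trace (the udevadm settle line is an assumption standing in for the uevent wait, not what the helper script runs):

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                # destroy existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191      # partition 1: 262144 sectors (~128 MiB)
    sgdisk "$disk" --new=2:264192:526335    # partition 2: 262144 sectors (~128 MiB)
    udevadm settle                          # assumed stand-in for scripts/sync_dev_uevents.sh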
00:04:06.007 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.007 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.007 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59389 00:04:06.007 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.007 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.007 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.008 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.285 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.543 20:20:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.802 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:07.061 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:07.061 00:04:07.061 real 0m4.148s 00:04:07.061 user 0m0.439s 00:04:07.061 sys 0m0.676s 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.061 20:20:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:07.061 ************************************ 00:04:07.061 END TEST dm_mount 00:04:07.061 ************************************ 00:04:07.061 20:20:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:07.061 20:20:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:07.061 20:20:28 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:07.061 20:20:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.061 20:20:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.061 20:20:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.061 20:20:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.061 20:20:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.319 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:07.320 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:07.320 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.320 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.320 20:20:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:07.320 00:04:07.320 real 0m9.512s 00:04:07.320 user 0m1.759s 00:04:07.320 sys 0m2.173s 00:04:07.320 20:20:28 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.320 ************************************ 00:04:07.320 END TEST devices 00:04:07.320 ************************************ 00:04:07.320 20:20:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:07.320 20:20:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:07.320 00:04:07.320 real 0m20.978s 00:04:07.320 user 0m6.846s 00:04:07.320 sys 0m8.591s 00:04:07.320 20:20:28 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.320 20:20:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.320 ************************************ 00:04:07.320 END TEST setup.sh 00:04:07.320 ************************************ 00:04:07.320 20:20:28 -- common/autotest_common.sh@1142 -- # return 0 00:04:07.320 20:20:28 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:07.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.145 Hugepages 00:04:08.145 node hugesize free / total 00:04:08.145 node0 1048576kB 0 / 0 00:04:08.145 node0 2048kB 2048 / 2048 00:04:08.145 00:04:08.145 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.145 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:08.145 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:08.145 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:08.145 20:20:29 -- spdk/autotest.sh@130 -- # uname -s 00:04:08.145 20:20:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:08.145 20:20:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:08.145 20:20:29 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.123 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:09.123 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.123 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:09.123 20:20:30 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:10.075 20:20:31 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:10.075 20:20:31 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:10.075 20:20:31 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.075 20:20:31 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:10.075 20:20:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:10.075 20:20:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:10.075 20:20:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.075 20:20:31 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.075 20:20:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:10.075 20:20:31 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:10.075 20:20:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.075 20:20:31 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.642 Waiting for block devices as requested 00:04:10.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:10.642 20:20:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:10.642 20:20:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:10.642 20:20:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:10.642 20:20:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:10.642 20:20:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:10.642 20:20:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1557 -- # continue 00:04:10.642 
20:20:32 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:10.642 20:20:32 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:10.642 20:20:32 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:10.642 20:20:32 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:10.642 20:20:32 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:10.642 20:20:32 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:10.642 20:20:32 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:10.642 20:20:32 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:10.642 20:20:32 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:10.642 20:20:32 -- common/autotest_common.sh@1557 -- # continue 00:04:10.642 20:20:32 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:10.642 20:20:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:10.642 20:20:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.900 20:20:32 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:10.900 20:20:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:10.900 20:20:32 -- common/autotest_common.sh@10 -- # set +x 00:04:10.900 20:20:32 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:11.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:11.725 20:20:33 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:11.725 20:20:33 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:11.725 20:20:33 -- common/autotest_common.sh@10 -- # set +x 00:04:11.725 20:20:33 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:11.725 20:20:33 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:11.725 20:20:33 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:11.725 20:20:33 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:11.725 20:20:33 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:11.725 20:20:33 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:11.725 20:20:33 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:11.725 20:20:33 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:11.725 20:20:33 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.725 20:20:33 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:11.725 20:20:33 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:11.725 20:20:33 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:11.725 20:20:33 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:11.725 20:20:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:11.725 20:20:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:11.725 20:20:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:11.725 20:20:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:11.725 20:20:33 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:11.725 20:20:33 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:11.725 20:20:33 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:11.725 20:20:33 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:11.725 20:20:33 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:11.725 20:20:33 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:11.725 20:20:33 -- common/autotest_common.sh@1593 -- # return 0 00:04:11.725 20:20:33 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:11.725 20:20:33 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:11.725 20:20:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:11.725 20:20:33 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:11.725 20:20:33 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:11.725 20:20:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:11.725 20:20:33 -- common/autotest_common.sh@10 -- # set +x 00:04:11.725 20:20:33 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:11.725 20:20:33 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:11.725 20:20:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.725 20:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.725 20:20:33 -- common/autotest_common.sh@10 -- # set +x 00:04:11.725 ************************************ 00:04:11.725 START TEST env 00:04:11.725 ************************************ 00:04:11.725 20:20:33 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:11.725 * Looking for test storage... 
00:04:11.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:11.725 20:20:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:11.725 20:20:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.725 20:20:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.725 20:20:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:11.725 ************************************ 00:04:11.725 START TEST env_memory 00:04:11.725 ************************************ 00:04:11.725 20:20:33 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:11.982 00:04:11.982 00:04:11.982 CUnit - A unit testing framework for C - Version 2.1-3 00:04:11.982 http://cunit.sourceforge.net/ 00:04:11.982 00:04:11.982 00:04:11.982 Suite: memory 00:04:11.982 Test: alloc and free memory map ...[2024-07-15 20:20:33.271786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:11.982 passed 00:04:11.982 Test: mem map translation ...[2024-07-15 20:20:33.303197] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:11.982 [2024-07-15 20:20:33.303240] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:11.982 [2024-07-15 20:20:33.303296] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:11.982 [2024-07-15 20:20:33.303307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:11.982 passed 00:04:11.982 Test: mem map registration ...[2024-07-15 20:20:33.367102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:11.982 [2024-07-15 20:20:33.367139] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:11.982 passed 00:04:11.982 Test: mem map adjacent registrations ...passed 00:04:11.982 00:04:11.982 Run Summary: Type Total Ran Passed Failed Inactive 00:04:11.982 suites 1 1 n/a 0 0 00:04:11.982 tests 4 4 4 0 0 00:04:11.982 asserts 152 152 152 0 n/a 00:04:11.982 00:04:11.982 Elapsed time = 0.215 seconds 00:04:11.982 00:04:11.982 real 0m0.233s 00:04:11.982 user 0m0.218s 00:04:11.982 sys 0m0.011s 00:04:11.982 20:20:33 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.982 ************************************ 00:04:11.982 END TEST env_memory 00:04:11.982 20:20:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:11.982 ************************************ 00:04:12.240 20:20:33 env -- common/autotest_common.sh@1142 -- # return 0 00:04:12.240 20:20:33 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:12.240 20:20:33 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.240 20:20:33 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.240 20:20:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.240 ************************************ 00:04:12.240 START TEST env_vtophys 
00:04:12.240 ************************************ 00:04:12.240 20:20:33 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:12.240 EAL: lib.eal log level changed from notice to debug 00:04:12.240 EAL: Detected lcore 0 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 1 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 2 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 3 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 4 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 5 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 6 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 7 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 8 as core 0 on socket 0 00:04:12.240 EAL: Detected lcore 9 as core 0 on socket 0 00:04:12.240 EAL: Maximum logical cores by configuration: 128 00:04:12.240 EAL: Detected CPU lcores: 10 00:04:12.240 EAL: Detected NUMA nodes: 1 00:04:12.240 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:12.240 EAL: Detected shared linkage of DPDK 00:04:12.240 EAL: No shared files mode enabled, IPC will be disabled 00:04:12.240 EAL: Selected IOVA mode 'PA' 00:04:12.240 EAL: Probing VFIO support... 00:04:12.240 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:12.240 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:12.240 EAL: Ask a virtual area of 0x2e000 bytes 00:04:12.240 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:12.240 EAL: Setting up physically contiguous memory... 00:04:12.240 EAL: Setting maximum number of open files to 524288 00:04:12.240 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:12.240 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:12.240 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.240 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:12.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.240 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.240 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:12.240 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:12.240 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.240 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:12.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.240 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.240 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:12.240 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:12.240 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.240 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:12.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.240 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.240 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:12.240 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:12.240 EAL: Ask a virtual area of 0x61000 bytes 00:04:12.240 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:12.240 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:12.240 EAL: Ask a virtual area of 0x400000000 bytes 00:04:12.240 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:12.240 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:12.240 EAL: Hugepages will be freed exactly as allocated. 
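Each memseg list above is created with n_segs:8192 and hugepage_sz:2097152, which matches the 0x400000000-byte virtual-address reservation the EAL reports per list; a quick arithmetic check of those figures (illustrative only, not part of the test run):

    echo $(( 8192 * 2097152 ))        # 17179869184 bytes = 0x400000000 reserved per memseg list
    echo $(( 4 * 8192 * 2097152 ))    # 68719476736 bytes = 64 GiB of VA reserved across the 4 lists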
00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: TSC frequency is ~2200000 KHz 00:04:12.240 EAL: Main lcore 0 is ready (tid=7f46e3214a00;cpuset=[0]) 00:04:12.240 EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 0 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 2MB 00:04:12.240 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:12.240 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:12.240 EAL: Mem event callback 'spdk:(nil)' registered 00:04:12.240 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:12.240 00:04:12.240 00:04:12.240 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.240 http://cunit.sourceforge.net/ 00:04:12.240 00:04:12.240 00:04:12.240 Suite: components_suite 00:04:12.240 Test: vtophys_malloc_test ...passed 00:04:12.240 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 4MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was shrunk by 4MB 00:04:12.240 EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 6MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was shrunk by 6MB 00:04:12.240 EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 10MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was shrunk by 10MB 00:04:12.240 EAL: Trying to obtain current memory policy. 
00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 18MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was shrunk by 18MB 00:04:12.240 EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 34MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was shrunk by 34MB 00:04:12.240 EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 66MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was shrunk by 66MB 00:04:12.240 EAL: Trying to obtain current memory policy. 00:04:12.240 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.240 EAL: Restoring previous memory policy: 4 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.240 EAL: request: mp_malloc_sync 00:04:12.240 EAL: No shared files mode enabled, IPC is disabled 00:04:12.240 EAL: Heap on socket 0 was expanded by 130MB 00:04:12.240 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.499 EAL: request: mp_malloc_sync 00:04:12.499 EAL: No shared files mode enabled, IPC is disabled 00:04:12.499 EAL: Heap on socket 0 was shrunk by 130MB 00:04:12.499 EAL: Trying to obtain current memory policy. 00:04:12.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.499 EAL: Restoring previous memory policy: 4 00:04:12.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.499 EAL: request: mp_malloc_sync 00:04:12.499 EAL: No shared files mode enabled, IPC is disabled 00:04:12.499 EAL: Heap on socket 0 was expanded by 258MB 00:04:12.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.499 EAL: request: mp_malloc_sync 00:04:12.499 EAL: No shared files mode enabled, IPC is disabled 00:04:12.499 EAL: Heap on socket 0 was shrunk by 258MB 00:04:12.499 EAL: Trying to obtain current memory policy. 
00:04:12.499 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.499 EAL: Restoring previous memory policy: 4 00:04:12.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.499 EAL: request: mp_malloc_sync 00:04:12.499 EAL: No shared files mode enabled, IPC is disabled 00:04:12.499 EAL: Heap on socket 0 was expanded by 514MB 00:04:12.499 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.756 EAL: request: mp_malloc_sync 00:04:12.756 EAL: No shared files mode enabled, IPC is disabled 00:04:12.756 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.756 EAL: Trying to obtain current memory policy. 00:04:12.756 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.756 EAL: Restoring previous memory policy: 4 00:04:12.756 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.756 EAL: request: mp_malloc_sync 00:04:12.756 EAL: No shared files mode enabled, IPC is disabled 00:04:12.756 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.014 passed 00:04:13.014 00:04:13.014 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.014 suites 1 1 n/a 0 0 00:04:13.014 tests 2 2 2 0 0 00:04:13.014 asserts 5288 5288 5288 0 n/a 00:04:13.014 00:04:13.014 Elapsed time = 0.741 seconds 00:04:13.014 EAL: request: mp_malloc_sync 00:04:13.014 EAL: No shared files mode enabled, IPC is disabled 00:04:13.014 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:13.014 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.014 EAL: request: mp_malloc_sync 00:04:13.014 EAL: No shared files mode enabled, IPC is disabled 00:04:13.014 EAL: Heap on socket 0 was shrunk by 2MB 00:04:13.014 EAL: No shared files mode enabled, IPC is disabled 00:04:13.014 EAL: No shared files mode enabled, IPC is disabled 00:04:13.014 EAL: No shared files mode enabled, IPC is disabled 00:04:13.014 ************************************ 00:04:13.014 END TEST env_vtophys 00:04:13.014 ************************************ 00:04:13.014 00:04:13.014 real 0m0.929s 00:04:13.014 user 0m0.481s 00:04:13.014 sys 0m0.318s 00:04:13.014 20:20:34 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.014 20:20:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:13.014 20:20:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.014 20:20:34 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.014 20:20:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.014 20:20:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.014 20:20:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.014 ************************************ 00:04:13.014 START TEST env_pci 00:04:13.014 ************************************ 00:04:13.014 20:20:34 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:13.014 00:04:13.014 00:04:13.014 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.014 http://cunit.sourceforge.net/ 00:04:13.014 00:04:13.014 00:04:13.014 Suite: pci 00:04:13.015 Test: pci_hook ...[2024-07-15 20:20:34.488679] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60572 has claimed it 00:04:13.015 passed 00:04:13.015 00:04:13.015 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.015 suites 1 1 n/a 0 0 00:04:13.015 tests 1 1 1 0 0 00:04:13.015 asserts 25 25 25 0 n/a 00:04:13.015 
00:04:13.015 Elapsed time = 0.002 seconds 00:04:13.015 EAL: Cannot find device (10000:00:01.0) 00:04:13.015 EAL: Failed to attach device on primary process 00:04:13.015 ************************************ 00:04:13.015 END TEST env_pci 00:04:13.015 ************************************ 00:04:13.015 00:04:13.015 real 0m0.019s 00:04:13.015 user 0m0.010s 00:04:13.015 sys 0m0.008s 00:04:13.015 20:20:34 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.015 20:20:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:13.274 20:20:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.274 20:20:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:13.274 20:20:34 env -- env/env.sh@15 -- # uname 00:04:13.274 20:20:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:13.274 20:20:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:13.274 20:20:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.274 20:20:34 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:13.274 20:20:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.274 20:20:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.274 ************************************ 00:04:13.274 START TEST env_dpdk_post_init 00:04:13.274 ************************************ 00:04:13.274 20:20:34 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:13.274 EAL: Detected CPU lcores: 10 00:04:13.274 EAL: Detected NUMA nodes: 1 00:04:13.274 EAL: Detected shared linkage of DPDK 00:04:13.274 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.274 EAL: Selected IOVA mode 'PA' 00:04:13.274 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:13.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:13.274 Starting DPDK initialization... 00:04:13.274 Starting SPDK post initialization... 00:04:13.274 SPDK NVMe probe 00:04:13.274 Attaching to 0000:00:10.0 00:04:13.274 Attaching to 0000:00:11.0 00:04:13.274 Attached to 0000:00:10.0 00:04:13.274 Attached to 0000:00:11.0 00:04:13.274 Cleaning up... 
00:04:13.274 00:04:13.274 real 0m0.187s 00:04:13.274 user 0m0.056s 00:04:13.274 sys 0m0.031s 00:04:13.274 ************************************ 00:04:13.274 END TEST env_dpdk_post_init 00:04:13.274 ************************************ 00:04:13.274 20:20:34 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.274 20:20:34 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.274 20:20:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.533 20:20:34 env -- env/env.sh@26 -- # uname 00:04:13.533 20:20:34 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.533 20:20:34 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.533 20:20:34 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.533 20:20:34 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.533 20:20:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.533 ************************************ 00:04:13.533 START TEST env_mem_callbacks 00:04:13.533 ************************************ 00:04:13.533 20:20:34 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.533 EAL: Detected CPU lcores: 10 00:04:13.533 EAL: Detected NUMA nodes: 1 00:04:13.533 EAL: Detected shared linkage of DPDK 00:04:13.533 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.533 EAL: Selected IOVA mode 'PA' 00:04:13.533 00:04:13.533 00:04:13.533 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.533 http://cunit.sourceforge.net/ 00:04:13.533 00:04:13.533 00:04:13.533 Suite: memory 00:04:13.533 Test: test ... 00:04:13.533 register 0x200000200000 2097152 00:04:13.533 malloc 3145728 00:04:13.533 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.533 register 0x200000400000 4194304 00:04:13.533 buf 0x200000500000 len 3145728 PASSED 00:04:13.533 malloc 64 00:04:13.533 buf 0x2000004fff40 len 64 PASSED 00:04:13.533 malloc 4194304 00:04:13.533 register 0x200000800000 6291456 00:04:13.533 buf 0x200000a00000 len 4194304 PASSED 00:04:13.533 free 0x200000500000 3145728 00:04:13.533 free 0x2000004fff40 64 00:04:13.533 unregister 0x200000400000 4194304 PASSED 00:04:13.533 free 0x200000a00000 4194304 00:04:13.533 unregister 0x200000800000 6291456 PASSED 00:04:13.533 malloc 8388608 00:04:13.533 register 0x200000400000 10485760 00:04:13.533 buf 0x200000600000 len 8388608 PASSED 00:04:13.533 free 0x200000600000 8388608 00:04:13.533 unregister 0x200000400000 10485760 PASSED 00:04:13.533 passed 00:04:13.533 00:04:13.533 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.533 suites 1 1 n/a 0 0 00:04:13.533 tests 1 1 1 0 0 00:04:13.533 asserts 15 15 15 0 n/a 00:04:13.533 00:04:13.533 Elapsed time = 0.008 seconds 00:04:13.533 00:04:13.533 real 0m0.156s 00:04:13.533 user 0m0.016s 00:04:13.533 sys 0m0.036s 00:04:13.533 20:20:34 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.533 20:20:34 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.533 ************************************ 00:04:13.533 END TEST env_mem_callbacks 00:04:13.533 ************************************ 00:04:13.533 20:20:34 env -- common/autotest_common.sh@1142 -- # return 0 00:04:13.533 ************************************ 00:04:13.533 END TEST env 00:04:13.533 ************************************ 00:04:13.533 00:04:13.533 real 0m1.850s 00:04:13.533 user 
0m0.884s 00:04:13.533 sys 0m0.619s 00:04:13.533 20:20:34 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.533 20:20:34 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.533 20:20:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.533 20:20:35 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.533 20:20:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.533 20:20:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.533 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:04:13.533 ************************************ 00:04:13.533 START TEST rpc 00:04:13.533 ************************************ 00:04:13.533 20:20:35 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:13.791 * Looking for test storage... 00:04:13.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:13.791 20:20:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60682 00:04:13.791 20:20:35 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:13.791 20:20:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.791 20:20:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60682 00:04:13.791 20:20:35 rpc -- common/autotest_common.sh@829 -- # '[' -z 60682 ']' 00:04:13.791 20:20:35 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.791 20:20:35 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.791 20:20:35 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:13.791 20:20:35 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.791 20:20:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.791 [2024-07-15 20:20:35.176036] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:13.791 [2024-07-15 20:20:35.176142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60682 ] 00:04:14.049 [2024-07-15 20:20:35.313253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.049 [2024-07-15 20:20:35.383480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:14.049 [2024-07-15 20:20:35.383543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60682' to capture a snapshot of events at runtime. 00:04:14.049 [2024-07-15 20:20:35.383559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:14.049 [2024-07-15 20:20:35.383570] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:14.049 [2024-07-15 20:20:35.383579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60682 for offline analysis/debug. 
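The app_setup_trace notices above describe two ways to inspect the bdev tracepoints enabled by the harness's "-e bdev" flag. A minimal sketch of both, assuming the pid (60682) and shared-memory trace path reported for this particular run and an spdk_trace binary built alongside the target:

  # Sketch only: both commands are taken from the notices printed above; the pid
  # and trace file name are specific to this run and change on every invocation.
  # Live snapshot of trace events from the running spdk_tgt:
  ./build/bin/spdk_trace -s spdk_tgt -p 60682
  # Or preserve the shared-memory trace file for offline analysis once the run ends:
  cp /dev/shm/spdk_tgt_trace.pid60682 /tmp/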
00:04:14.049 [2024-07-15 20:20:35.383615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.049 20:20:35 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.049 20:20:35 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:14.049 20:20:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.049 20:20:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:14.049 20:20:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.049 20:20:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.049 20:20:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.049 20:20:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.049 20:20:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.307 ************************************ 00:04:14.307 START TEST rpc_integrity 00:04:14.307 ************************************ 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.307 { 00:04:14.307 "aliases": [ 00:04:14.307 "90963930-f9c2-4f8c-88f7-70032d9df13a" 00:04:14.307 ], 00:04:14.307 "assigned_rate_limits": { 00:04:14.307 "r_mbytes_per_sec": 0, 00:04:14.307 "rw_ios_per_sec": 0, 00:04:14.307 "rw_mbytes_per_sec": 0, 00:04:14.307 "w_mbytes_per_sec": 0 00:04:14.307 }, 00:04:14.307 "block_size": 512, 00:04:14.307 "claimed": false, 00:04:14.307 "driver_specific": {}, 00:04:14.307 "memory_domains": [ 00:04:14.307 { 00:04:14.307 "dma_device_id": "system", 00:04:14.307 "dma_device_type": 1 00:04:14.307 }, 00:04:14.307 { 00:04:14.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.307 "dma_device_type": 2 00:04:14.307 } 00:04:14.307 ], 00:04:14.307 "name": "Malloc0", 
00:04:14.307 "num_blocks": 16384, 00:04:14.307 "product_name": "Malloc disk", 00:04:14.307 "supported_io_types": { 00:04:14.307 "abort": true, 00:04:14.307 "compare": false, 00:04:14.307 "compare_and_write": false, 00:04:14.307 "copy": true, 00:04:14.307 "flush": true, 00:04:14.307 "get_zone_info": false, 00:04:14.307 "nvme_admin": false, 00:04:14.307 "nvme_io": false, 00:04:14.307 "nvme_io_md": false, 00:04:14.307 "nvme_iov_md": false, 00:04:14.307 "read": true, 00:04:14.307 "reset": true, 00:04:14.307 "seek_data": false, 00:04:14.307 "seek_hole": false, 00:04:14.307 "unmap": true, 00:04:14.307 "write": true, 00:04:14.307 "write_zeroes": true, 00:04:14.307 "zcopy": true, 00:04:14.307 "zone_append": false, 00:04:14.307 "zone_management": false 00:04:14.307 }, 00:04:14.307 "uuid": "90963930-f9c2-4f8c-88f7-70032d9df13a", 00:04:14.307 "zoned": false 00:04:14.307 } 00:04:14.307 ]' 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.307 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.307 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.307 [2024-07-15 20:20:35.712544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.308 [2024-07-15 20:20:35.712600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.308 [2024-07-15 20:20:35.712621] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7bfad0 00:04:14.308 [2024-07-15 20:20:35.712631] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.308 [2024-07-15 20:20:35.714184] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.308 [2024-07-15 20:20:35.714226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.308 Passthru0 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.308 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.308 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.308 { 00:04:14.308 "aliases": [ 00:04:14.308 "90963930-f9c2-4f8c-88f7-70032d9df13a" 00:04:14.308 ], 00:04:14.308 "assigned_rate_limits": { 00:04:14.308 "r_mbytes_per_sec": 0, 00:04:14.308 "rw_ios_per_sec": 0, 00:04:14.308 "rw_mbytes_per_sec": 0, 00:04:14.308 "w_mbytes_per_sec": 0 00:04:14.308 }, 00:04:14.308 "block_size": 512, 00:04:14.308 "claim_type": "exclusive_write", 00:04:14.308 "claimed": true, 00:04:14.308 "driver_specific": {}, 00:04:14.308 "memory_domains": [ 00:04:14.308 { 00:04:14.308 "dma_device_id": "system", 00:04:14.308 "dma_device_type": 1 00:04:14.308 }, 00:04:14.308 { 00:04:14.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.308 "dma_device_type": 2 00:04:14.308 } 00:04:14.308 ], 00:04:14.308 "name": "Malloc0", 00:04:14.308 "num_blocks": 16384, 00:04:14.308 "product_name": "Malloc disk", 00:04:14.308 "supported_io_types": { 00:04:14.308 "abort": true, 00:04:14.308 "compare": false, 00:04:14.308 
"compare_and_write": false, 00:04:14.308 "copy": true, 00:04:14.308 "flush": true, 00:04:14.308 "get_zone_info": false, 00:04:14.308 "nvme_admin": false, 00:04:14.308 "nvme_io": false, 00:04:14.308 "nvme_io_md": false, 00:04:14.308 "nvme_iov_md": false, 00:04:14.308 "read": true, 00:04:14.308 "reset": true, 00:04:14.308 "seek_data": false, 00:04:14.308 "seek_hole": false, 00:04:14.308 "unmap": true, 00:04:14.308 "write": true, 00:04:14.308 "write_zeroes": true, 00:04:14.308 "zcopy": true, 00:04:14.308 "zone_append": false, 00:04:14.308 "zone_management": false 00:04:14.308 }, 00:04:14.308 "uuid": "90963930-f9c2-4f8c-88f7-70032d9df13a", 00:04:14.308 "zoned": false 00:04:14.308 }, 00:04:14.308 { 00:04:14.308 "aliases": [ 00:04:14.308 "9256e74d-7472-566b-b67a-1de6d2faa640" 00:04:14.308 ], 00:04:14.308 "assigned_rate_limits": { 00:04:14.308 "r_mbytes_per_sec": 0, 00:04:14.308 "rw_ios_per_sec": 0, 00:04:14.308 "rw_mbytes_per_sec": 0, 00:04:14.308 "w_mbytes_per_sec": 0 00:04:14.308 }, 00:04:14.308 "block_size": 512, 00:04:14.308 "claimed": false, 00:04:14.308 "driver_specific": { 00:04:14.308 "passthru": { 00:04:14.308 "base_bdev_name": "Malloc0", 00:04:14.308 "name": "Passthru0" 00:04:14.308 } 00:04:14.308 }, 00:04:14.308 "memory_domains": [ 00:04:14.308 { 00:04:14.308 "dma_device_id": "system", 00:04:14.308 "dma_device_type": 1 00:04:14.308 }, 00:04:14.308 { 00:04:14.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.308 "dma_device_type": 2 00:04:14.308 } 00:04:14.308 ], 00:04:14.308 "name": "Passthru0", 00:04:14.308 "num_blocks": 16384, 00:04:14.308 "product_name": "passthru", 00:04:14.308 "supported_io_types": { 00:04:14.308 "abort": true, 00:04:14.308 "compare": false, 00:04:14.308 "compare_and_write": false, 00:04:14.308 "copy": true, 00:04:14.308 "flush": true, 00:04:14.308 "get_zone_info": false, 00:04:14.308 "nvme_admin": false, 00:04:14.308 "nvme_io": false, 00:04:14.308 "nvme_io_md": false, 00:04:14.308 "nvme_iov_md": false, 00:04:14.308 "read": true, 00:04:14.308 "reset": true, 00:04:14.308 "seek_data": false, 00:04:14.308 "seek_hole": false, 00:04:14.308 "unmap": true, 00:04:14.308 "write": true, 00:04:14.308 "write_zeroes": true, 00:04:14.308 "zcopy": true, 00:04:14.308 "zone_append": false, 00:04:14.308 "zone_management": false 00:04:14.308 }, 00:04:14.308 "uuid": "9256e74d-7472-566b-b67a-1de6d2faa640", 00:04:14.308 "zoned": false 00:04:14.308 } 00:04:14.308 ]' 00:04:14.308 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.308 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.308 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.308 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.308 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.566 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.566 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.566 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.566 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:04:14.566 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.566 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.566 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.566 20:20:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.566 00:04:14.566 real 0m0.328s 00:04:14.566 user 0m0.224s 00:04:14.566 sys 0m0.036s 00:04:14.566 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.566 20:20:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.566 ************************************ 00:04:14.566 END TEST rpc_integrity 00:04:14.566 ************************************ 00:04:14.566 20:20:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.566 20:20:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.566 20:20:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.566 20:20:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.566 20:20:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.566 ************************************ 00:04:14.566 START TEST rpc_plugins 00:04:14.566 ************************************ 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:14.566 20:20:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.566 20:20:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.566 20:20:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.566 20:20:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.566 20:20:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.566 { 00:04:14.566 "aliases": [ 00:04:14.566 "9b1d4e1b-f562-495f-9bbb-1aaffe932e9e" 00:04:14.566 ], 00:04:14.566 "assigned_rate_limits": { 00:04:14.566 "r_mbytes_per_sec": 0, 00:04:14.566 "rw_ios_per_sec": 0, 00:04:14.566 "rw_mbytes_per_sec": 0, 00:04:14.566 "w_mbytes_per_sec": 0 00:04:14.566 }, 00:04:14.566 "block_size": 4096, 00:04:14.566 "claimed": false, 00:04:14.566 "driver_specific": {}, 00:04:14.567 "memory_domains": [ 00:04:14.567 { 00:04:14.567 "dma_device_id": "system", 00:04:14.567 "dma_device_type": 1 00:04:14.567 }, 00:04:14.567 { 00:04:14.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.567 "dma_device_type": 2 00:04:14.567 } 00:04:14.567 ], 00:04:14.567 "name": "Malloc1", 00:04:14.567 "num_blocks": 256, 00:04:14.567 "product_name": "Malloc disk", 00:04:14.567 "supported_io_types": { 00:04:14.567 "abort": true, 00:04:14.567 "compare": false, 00:04:14.567 "compare_and_write": false, 00:04:14.567 "copy": true, 00:04:14.567 "flush": true, 00:04:14.567 "get_zone_info": false, 00:04:14.567 "nvme_admin": false, 00:04:14.567 "nvme_io": false, 00:04:14.567 "nvme_io_md": false, 00:04:14.567 "nvme_iov_md": false, 00:04:14.567 "read": true, 00:04:14.567 "reset": true, 00:04:14.567 "seek_data": false, 00:04:14.567 "seek_hole": false, 00:04:14.567 "unmap": true, 00:04:14.567 "write": true, 00:04:14.567 "write_zeroes": true, 
00:04:14.567 "zcopy": true, 00:04:14.567 "zone_append": false, 00:04:14.567 "zone_management": false 00:04:14.567 }, 00:04:14.567 "uuid": "9b1d4e1b-f562-495f-9bbb-1aaffe932e9e", 00:04:14.567 "zoned": false 00:04:14.567 } 00:04:14.567 ]' 00:04:14.567 20:20:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.567 20:20:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.567 20:20:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.567 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.567 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.567 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.567 20:20:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.567 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.567 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.567 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.567 20:20:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.567 20:20:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.825 20:20:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.825 00:04:14.825 real 0m0.151s 00:04:14.825 user 0m0.089s 00:04:14.825 sys 0m0.023s 00:04:14.825 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.825 20:20:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.825 ************************************ 00:04:14.825 END TEST rpc_plugins 00:04:14.825 ************************************ 00:04:14.825 20:20:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:14.825 20:20:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.825 20:20:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.825 20:20:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.825 20:20:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.825 ************************************ 00:04:14.825 START TEST rpc_trace_cmd_test 00:04:14.825 ************************************ 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.825 "bdev": { 00:04:14.825 "mask": "0x8", 00:04:14.825 "tpoint_mask": "0xffffffffffffffff" 00:04:14.825 }, 00:04:14.825 "bdev_nvme": { 00:04:14.825 "mask": "0x4000", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "blobfs": { 00:04:14.825 "mask": "0x80", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "dsa": { 00:04:14.825 "mask": "0x200", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "ftl": { 00:04:14.825 "mask": "0x40", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "iaa": { 00:04:14.825 "mask": "0x1000", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "iscsi_conn": { 
00:04:14.825 "mask": "0x2", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "nvme_pcie": { 00:04:14.825 "mask": "0x800", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "nvme_tcp": { 00:04:14.825 "mask": "0x2000", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "nvmf_rdma": { 00:04:14.825 "mask": "0x10", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "nvmf_tcp": { 00:04:14.825 "mask": "0x20", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "scsi": { 00:04:14.825 "mask": "0x4", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "sock": { 00:04:14.825 "mask": "0x8000", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "thread": { 00:04:14.825 "mask": "0x400", 00:04:14.825 "tpoint_mask": "0x0" 00:04:14.825 }, 00:04:14.825 "tpoint_group_mask": "0x8", 00:04:14.825 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60682" 00:04:14.825 }' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.825 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:15.084 20:20:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:15.084 00:04:15.084 real 0m0.242s 00:04:15.084 user 0m0.213s 00:04:15.084 sys 0m0.021s 00:04:15.084 20:20:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.084 ************************************ 00:04:15.084 20:20:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:15.084 END TEST rpc_trace_cmd_test 00:04:15.084 ************************************ 00:04:15.084 20:20:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.084 20:20:36 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:15.084 20:20:36 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:15.084 20:20:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.084 20:20:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.084 20:20:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.084 ************************************ 00:04:15.084 START TEST go_rpc 00:04:15.084 ************************************ 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@1123 -- # go_rpc 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["131c9cf2-4efa-4b08-9b6d-bbe4cf154000"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"131c9cf2-4efa-4b08-9b6d-bbe4cf154000","zoned":false}]' 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.084 20:20:36 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.084 20:20:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:15.343 20:20:36 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:15.343 20:20:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:15.343 20:20:36 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:15.343 00:04:15.343 real 0m0.228s 00:04:15.343 user 0m0.158s 00:04:15.343 sys 0m0.037s 00:04:15.343 20:20:36 rpc.go_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.343 20:20:36 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.343 ************************************ 00:04:15.343 END TEST go_rpc 00:04:15.343 ************************************ 00:04:15.343 20:20:36 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.343 20:20:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:15.343 20:20:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:15.343 20:20:36 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.343 20:20:36 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.343 20:20:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.343 ************************************ 00:04:15.343 START TEST rpc_daemon_integrity 00:04:15.343 ************************************ 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 
-- # '[' 0 == 0 ']' 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.343 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.343 { 00:04:15.343 "aliases": [ 00:04:15.343 "c7298fc4-43f9-4c4a-b8d5-81ed5ccb292b" 00:04:15.343 ], 00:04:15.343 "assigned_rate_limits": { 00:04:15.343 "r_mbytes_per_sec": 0, 00:04:15.343 "rw_ios_per_sec": 0, 00:04:15.343 "rw_mbytes_per_sec": 0, 00:04:15.343 "w_mbytes_per_sec": 0 00:04:15.343 }, 00:04:15.343 "block_size": 512, 00:04:15.343 "claimed": false, 00:04:15.343 "driver_specific": {}, 00:04:15.343 "memory_domains": [ 00:04:15.343 { 00:04:15.343 "dma_device_id": "system", 00:04:15.343 "dma_device_type": 1 00:04:15.343 }, 00:04:15.343 { 00:04:15.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.343 "dma_device_type": 2 00:04:15.343 } 00:04:15.343 ], 00:04:15.343 "name": "Malloc3", 00:04:15.343 "num_blocks": 16384, 00:04:15.344 "product_name": "Malloc disk", 00:04:15.344 "supported_io_types": { 00:04:15.344 "abort": true, 00:04:15.344 "compare": false, 00:04:15.344 "compare_and_write": false, 00:04:15.344 "copy": true, 00:04:15.344 "flush": true, 00:04:15.344 "get_zone_info": false, 00:04:15.344 "nvme_admin": false, 00:04:15.344 "nvme_io": false, 00:04:15.344 "nvme_io_md": false, 00:04:15.344 "nvme_iov_md": false, 00:04:15.344 "read": true, 00:04:15.344 "reset": true, 00:04:15.344 "seek_data": false, 00:04:15.344 "seek_hole": false, 00:04:15.344 "unmap": true, 00:04:15.344 "write": true, 00:04:15.344 "write_zeroes": true, 00:04:15.344 "zcopy": true, 00:04:15.344 "zone_append": false, 00:04:15.344 "zone_management": false 00:04:15.344 }, 00:04:15.344 "uuid": "c7298fc4-43f9-4c4a-b8d5-81ed5ccb292b", 00:04:15.344 "zoned": false 00:04:15.344 } 00:04:15.344 ]' 00:04:15.344 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.603 [2024-07-15 20:20:36.856942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:15.603 [2024-07-15 20:20:36.857003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.603 [2024-07-15 20:20:36.857022] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b6d70 00:04:15.603 [2024-07-15 20:20:36.857033] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.603 [2024-07-15 20:20:36.858457] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.603 [2024-07-15 20:20:36.858523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.603 Passthru0 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.603 { 00:04:15.603 "aliases": [ 00:04:15.603 "c7298fc4-43f9-4c4a-b8d5-81ed5ccb292b" 00:04:15.603 ], 00:04:15.603 "assigned_rate_limits": { 00:04:15.603 "r_mbytes_per_sec": 0, 00:04:15.603 "rw_ios_per_sec": 0, 00:04:15.603 "rw_mbytes_per_sec": 0, 00:04:15.603 "w_mbytes_per_sec": 0 00:04:15.603 }, 00:04:15.603 "block_size": 512, 00:04:15.603 "claim_type": "exclusive_write", 00:04:15.603 "claimed": true, 00:04:15.603 "driver_specific": {}, 00:04:15.603 "memory_domains": [ 00:04:15.603 { 00:04:15.603 "dma_device_id": "system", 00:04:15.603 "dma_device_type": 1 00:04:15.603 }, 00:04:15.603 { 00:04:15.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.603 "dma_device_type": 2 00:04:15.603 } 00:04:15.603 ], 00:04:15.603 "name": "Malloc3", 00:04:15.603 "num_blocks": 16384, 00:04:15.603 "product_name": "Malloc disk", 00:04:15.603 "supported_io_types": { 00:04:15.603 "abort": true, 00:04:15.603 "compare": false, 00:04:15.603 "compare_and_write": false, 00:04:15.603 "copy": true, 00:04:15.603 "flush": true, 00:04:15.603 "get_zone_info": false, 00:04:15.603 "nvme_admin": false, 00:04:15.603 "nvme_io": false, 00:04:15.603 "nvme_io_md": false, 00:04:15.603 "nvme_iov_md": false, 00:04:15.603 "read": true, 00:04:15.603 "reset": true, 00:04:15.603 "seek_data": false, 00:04:15.603 "seek_hole": false, 00:04:15.603 "unmap": true, 00:04:15.603 "write": true, 00:04:15.603 "write_zeroes": true, 00:04:15.603 "zcopy": true, 00:04:15.603 "zone_append": false, 00:04:15.603 "zone_management": false 00:04:15.603 }, 00:04:15.603 "uuid": "c7298fc4-43f9-4c4a-b8d5-81ed5ccb292b", 00:04:15.603 "zoned": false 00:04:15.603 }, 00:04:15.603 { 00:04:15.603 "aliases": [ 00:04:15.603 "f248984a-629d-5390-8868-c433e23ddbea" 00:04:15.603 ], 00:04:15.603 "assigned_rate_limits": { 00:04:15.603 "r_mbytes_per_sec": 0, 00:04:15.603 "rw_ios_per_sec": 0, 00:04:15.603 "rw_mbytes_per_sec": 0, 00:04:15.603 "w_mbytes_per_sec": 0 00:04:15.603 }, 00:04:15.603 "block_size": 512, 00:04:15.603 "claimed": false, 00:04:15.603 "driver_specific": { 00:04:15.603 "passthru": { 00:04:15.603 "base_bdev_name": "Malloc3", 00:04:15.603 "name": "Passthru0" 00:04:15.603 } 00:04:15.603 }, 00:04:15.603 "memory_domains": [ 00:04:15.603 { 00:04:15.603 "dma_device_id": "system", 00:04:15.603 "dma_device_type": 1 00:04:15.603 }, 00:04:15.603 { 00:04:15.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.603 "dma_device_type": 2 00:04:15.603 } 00:04:15.603 ], 00:04:15.603 "name": "Passthru0", 00:04:15.603 "num_blocks": 16384, 00:04:15.603 "product_name": "passthru", 00:04:15.603 "supported_io_types": { 00:04:15.603 "abort": true, 00:04:15.603 "compare": false, 00:04:15.603 "compare_and_write": false, 00:04:15.603 "copy": true, 00:04:15.603 "flush": true, 00:04:15.603 
"get_zone_info": false, 00:04:15.603 "nvme_admin": false, 00:04:15.603 "nvme_io": false, 00:04:15.603 "nvme_io_md": false, 00:04:15.603 "nvme_iov_md": false, 00:04:15.603 "read": true, 00:04:15.603 "reset": true, 00:04:15.603 "seek_data": false, 00:04:15.603 "seek_hole": false, 00:04:15.603 "unmap": true, 00:04:15.603 "write": true, 00:04:15.603 "write_zeroes": true, 00:04:15.603 "zcopy": true, 00:04:15.603 "zone_append": false, 00:04:15.603 "zone_management": false 00:04:15.603 }, 00:04:15.603 "uuid": "f248984a-629d-5390-8868-c433e23ddbea", 00:04:15.603 "zoned": false 00:04:15.603 } 00:04:15.603 ]' 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.603 20:20:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.603 20:20:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.603 00:04:15.603 real 0m0.332s 00:04:15.603 user 0m0.215s 00:04:15.603 sys 0m0.046s 00:04:15.603 ************************************ 00:04:15.603 END TEST rpc_daemon_integrity 00:04:15.604 20:20:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.604 20:20:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.604 ************************************ 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:15.604 20:20:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.604 20:20:37 rpc -- rpc/rpc.sh@84 -- # killprocess 60682 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@948 -- # '[' -z 60682 ']' 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@952 -- # kill -0 60682 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@953 -- # uname 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60682 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:15.604 killing process with pid 60682 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
60682' 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@967 -- # kill 60682 00:04:15.604 20:20:37 rpc -- common/autotest_common.sh@972 -- # wait 60682 00:04:15.862 00:04:15.862 real 0m2.316s 00:04:15.862 user 0m3.225s 00:04:15.862 sys 0m0.582s 00:04:15.862 20:20:37 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.862 ************************************ 00:04:15.862 END TEST rpc 00:04:15.862 ************************************ 00:04:15.862 20:20:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.137 20:20:37 -- common/autotest_common.sh@1142 -- # return 0 00:04:16.137 20:20:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:16.137 20:20:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.137 20:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.137 20:20:37 -- common/autotest_common.sh@10 -- # set +x 00:04:16.137 ************************************ 00:04:16.137 START TEST skip_rpc 00:04:16.137 ************************************ 00:04:16.137 20:20:37 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:16.137 * Looking for test storage... 00:04:16.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.137 20:20:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:16.137 20:20:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:16.137 20:20:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:16.137 20:20:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.137 20:20:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.137 20:20:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.137 ************************************ 00:04:16.137 START TEST skip_rpc 00:04:16.137 ************************************ 00:04:16.137 20:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:16.137 20:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60924 00:04:16.137 20:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:16.137 20:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.137 20:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.137 [2024-07-15 20:20:37.565401] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
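The skip_rpc case starting above launches spdk_tgt with --no-rpc-server, so the rpc_cmd spdk_get_version attempted later in the trace is expected to fail because nothing listens on /var/tmp/spdk.sock. A minimal standalone sketch of the same check, assuming the repository path and default socket location visible in this log:

  # Sketch only, mirroring the skip_rpc flow in the surrounding trace; paths are
  # the ones visible in this log and may differ in other environments.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &   # target started without an RPC server
  tgt_pid=$!
  sleep 5                                                   # same settle time the test uses
  # spdk_get_version should fail: no listener on /var/tmp/spdk.sock.
  if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
      echo "unexpected: RPC call succeeded" >&2
      kill "$tgt_pid"; exit 1
  fi
  kill "$tgt_pid"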
00:04:16.137 [2024-07-15 20:20:37.565503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60924 ] 00:04:16.434 [2024-07-15 20:20:37.708904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.434 [2024-07-15 20:20:37.788440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.700 20:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.701 2024/07/15 20:20:42 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60924 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 60924 ']' 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 60924 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60924 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:21.701 killing process with pid 60924 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60924' 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 60924 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 60924 00:04:21.701 00:04:21.701 real 0m5.279s 00:04:21.701 user 0m4.988s 00:04:21.701 sys 0m0.183s 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.701 ************************************ 00:04:21.701 20:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.701 END TEST skip_rpc 00:04:21.701 ************************************ 00:04:21.701 20:20:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.701 20:20:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.701 20:20:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.701 20:20:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.701 20:20:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.701 ************************************ 00:04:21.701 START TEST skip_rpc_with_json 00:04:21.701 ************************************ 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61022 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61022 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 61022 ']' 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.701 20:20:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.701 [2024-07-15 20:20:42.889125] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:04:21.701 [2024-07-15 20:20:42.889233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61022 ] 00:04:21.701 [2024-07-15 20:20:43.027286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.701 [2024-07-15 20:20:43.096211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.633 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:22.633 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:22.633 20:20:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:22.633 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.633 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.634 [2024-07-15 20:20:43.951629] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:22.634 2024/07/15 20:20:43 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:22.634 request: 00:04:22.634 { 00:04:22.634 "method": "nvmf_get_transports", 00:04:22.634 "params": { 00:04:22.634 "trtype": "tcp" 00:04:22.634 } 00:04:22.634 } 00:04:22.634 Got JSON-RPC error response 00:04:22.634 GoRPCClient: error on JSON-RPC call 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.634 [2024-07-15 20:20:43.959727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:22.634 20:20:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.634 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:22.634 20:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:22.634 { 00:04:22.634 "subsystems": [ 00:04:22.634 { 00:04:22.634 "subsystem": "keyring", 00:04:22.634 "config": [] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "iobuf", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "iobuf_set_options", 00:04:22.634 "params": { 00:04:22.634 "large_bufsize": 135168, 00:04:22.634 "large_pool_count": 1024, 00:04:22.634 "small_bufsize": 8192, 00:04:22.634 "small_pool_count": 8192 00:04:22.634 } 00:04:22.634 } 00:04:22.634 ] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "sock", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "sock_set_default_impl", 00:04:22.634 "params": { 00:04:22.634 "impl_name": "posix" 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": 
"sock_impl_set_options", 00:04:22.634 "params": { 00:04:22.634 "enable_ktls": false, 00:04:22.634 "enable_placement_id": 0, 00:04:22.634 "enable_quickack": false, 00:04:22.634 "enable_recv_pipe": true, 00:04:22.634 "enable_zerocopy_send_client": false, 00:04:22.634 "enable_zerocopy_send_server": true, 00:04:22.634 "impl_name": "ssl", 00:04:22.634 "recv_buf_size": 4096, 00:04:22.634 "send_buf_size": 4096, 00:04:22.634 "tls_version": 0, 00:04:22.634 "zerocopy_threshold": 0 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "sock_impl_set_options", 00:04:22.634 "params": { 00:04:22.634 "enable_ktls": false, 00:04:22.634 "enable_placement_id": 0, 00:04:22.634 "enable_quickack": false, 00:04:22.634 "enable_recv_pipe": true, 00:04:22.634 "enable_zerocopy_send_client": false, 00:04:22.634 "enable_zerocopy_send_server": true, 00:04:22.634 "impl_name": "posix", 00:04:22.634 "recv_buf_size": 2097152, 00:04:22.634 "send_buf_size": 2097152, 00:04:22.634 "tls_version": 0, 00:04:22.634 "zerocopy_threshold": 0 00:04:22.634 } 00:04:22.634 } 00:04:22.634 ] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "vmd", 00:04:22.634 "config": [] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "accel", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "accel_set_options", 00:04:22.634 "params": { 00:04:22.634 "buf_count": 2048, 00:04:22.634 "large_cache_size": 16, 00:04:22.634 "sequence_count": 2048, 00:04:22.634 "small_cache_size": 128, 00:04:22.634 "task_count": 2048 00:04:22.634 } 00:04:22.634 } 00:04:22.634 ] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "bdev", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "bdev_set_options", 00:04:22.634 "params": { 00:04:22.634 "bdev_auto_examine": true, 00:04:22.634 "bdev_io_cache_size": 256, 00:04:22.634 "bdev_io_pool_size": 65535, 00:04:22.634 "iobuf_large_cache_size": 16, 00:04:22.634 "iobuf_small_cache_size": 128 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "bdev_raid_set_options", 00:04:22.634 "params": { 00:04:22.634 "process_window_size_kb": 1024 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "bdev_iscsi_set_options", 00:04:22.634 "params": { 00:04:22.634 "timeout_sec": 30 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "bdev_nvme_set_options", 00:04:22.634 "params": { 00:04:22.634 "action_on_timeout": "none", 00:04:22.634 "allow_accel_sequence": false, 00:04:22.634 "arbitration_burst": 0, 00:04:22.634 "bdev_retry_count": 3, 00:04:22.634 "ctrlr_loss_timeout_sec": 0, 00:04:22.634 "delay_cmd_submit": true, 00:04:22.634 "dhchap_dhgroups": [ 00:04:22.634 "null", 00:04:22.634 "ffdhe2048", 00:04:22.634 "ffdhe3072", 00:04:22.634 "ffdhe4096", 00:04:22.634 "ffdhe6144", 00:04:22.634 "ffdhe8192" 00:04:22.634 ], 00:04:22.634 "dhchap_digests": [ 00:04:22.634 "sha256", 00:04:22.634 "sha384", 00:04:22.634 "sha512" 00:04:22.634 ], 00:04:22.634 "disable_auto_failback": false, 00:04:22.634 "fast_io_fail_timeout_sec": 0, 00:04:22.634 "generate_uuids": false, 00:04:22.634 "high_priority_weight": 0, 00:04:22.634 "io_path_stat": false, 00:04:22.634 "io_queue_requests": 0, 00:04:22.634 "keep_alive_timeout_ms": 10000, 00:04:22.634 "low_priority_weight": 0, 00:04:22.634 "medium_priority_weight": 0, 00:04:22.634 "nvme_adminq_poll_period_us": 10000, 00:04:22.634 "nvme_error_stat": false, 00:04:22.634 "nvme_ioq_poll_period_us": 0, 00:04:22.634 "rdma_cm_event_timeout_ms": 0, 00:04:22.634 "rdma_max_cq_size": 0, 00:04:22.634 "rdma_srq_size": 0, 00:04:22.634 
"reconnect_delay_sec": 0, 00:04:22.634 "timeout_admin_us": 0, 00:04:22.634 "timeout_us": 0, 00:04:22.634 "transport_ack_timeout": 0, 00:04:22.634 "transport_retry_count": 4, 00:04:22.634 "transport_tos": 0 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "bdev_nvme_set_hotplug", 00:04:22.634 "params": { 00:04:22.634 "enable": false, 00:04:22.634 "period_us": 100000 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "bdev_wait_for_examine" 00:04:22.634 } 00:04:22.634 ] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "scsi", 00:04:22.634 "config": null 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "scheduler", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "framework_set_scheduler", 00:04:22.634 "params": { 00:04:22.634 "name": "static" 00:04:22.634 } 00:04:22.634 } 00:04:22.634 ] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "vhost_scsi", 00:04:22.634 "config": [] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "vhost_blk", 00:04:22.634 "config": [] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "ublk", 00:04:22.634 "config": [] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "nbd", 00:04:22.634 "config": [] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "nvmf", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "nvmf_set_config", 00:04:22.634 "params": { 00:04:22.634 "admin_cmd_passthru": { 00:04:22.634 "identify_ctrlr": false 00:04:22.634 }, 00:04:22.634 "discovery_filter": "match_any" 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "nvmf_set_max_subsystems", 00:04:22.634 "params": { 00:04:22.634 "max_subsystems": 1024 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "nvmf_set_crdt", 00:04:22.634 "params": { 00:04:22.634 "crdt1": 0, 00:04:22.634 "crdt2": 0, 00:04:22.634 "crdt3": 0 00:04:22.634 } 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "method": "nvmf_create_transport", 00:04:22.634 "params": { 00:04:22.634 "abort_timeout_sec": 1, 00:04:22.634 "ack_timeout": 0, 00:04:22.634 "buf_cache_size": 4294967295, 00:04:22.634 "c2h_success": true, 00:04:22.634 "data_wr_pool_size": 0, 00:04:22.634 "dif_insert_or_strip": false, 00:04:22.634 "in_capsule_data_size": 4096, 00:04:22.634 "io_unit_size": 131072, 00:04:22.634 "max_aq_depth": 128, 00:04:22.634 "max_io_qpairs_per_ctrlr": 127, 00:04:22.634 "max_io_size": 131072, 00:04:22.634 "max_queue_depth": 128, 00:04:22.634 "num_shared_buffers": 511, 00:04:22.634 "sock_priority": 0, 00:04:22.634 "trtype": "TCP", 00:04:22.634 "zcopy": false 00:04:22.634 } 00:04:22.634 } 00:04:22.634 ] 00:04:22.634 }, 00:04:22.634 { 00:04:22.634 "subsystem": "iscsi", 00:04:22.634 "config": [ 00:04:22.634 { 00:04:22.634 "method": "iscsi_set_options", 00:04:22.634 "params": { 00:04:22.634 "allow_duplicated_isid": false, 00:04:22.634 "chap_group": 0, 00:04:22.634 "data_out_pool_size": 2048, 00:04:22.634 "default_time2retain": 20, 00:04:22.634 "default_time2wait": 2, 00:04:22.634 "disable_chap": false, 00:04:22.634 "error_recovery_level": 0, 00:04:22.634 "first_burst_length": 8192, 00:04:22.634 "immediate_data": true, 00:04:22.634 "immediate_data_pool_size": 16384, 00:04:22.634 "max_connections_per_session": 2, 00:04:22.634 "max_large_datain_per_connection": 64, 00:04:22.634 "max_queue_depth": 64, 00:04:22.634 "max_r2t_per_connection": 4, 00:04:22.635 "max_sessions": 128, 00:04:22.635 "mutual_chap": false, 00:04:22.635 "node_base": "iqn.2016-06.io.spdk", 00:04:22.635 "nop_in_interval": 30, 00:04:22.635 
"nop_timeout": 60, 00:04:22.635 "pdu_pool_size": 36864, 00:04:22.635 "require_chap": false 00:04:22.635 } 00:04:22.635 } 00:04:22.635 ] 00:04:22.635 } 00:04:22.635 ] 00:04:22.635 } 00:04:22.635 20:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.635 20:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61022 00:04:22.635 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61022 ']' 00:04:22.635 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61022 00:04:22.635 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61022 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61022' 00:04:22.893 killing process with pid 61022 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61022 00:04:22.893 20:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61022 00:04:23.151 20:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61056 00:04:23.151 20:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:23.151 20:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61056 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 61056 ']' 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 61056 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61056 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61056' 00:04:28.416 killing process with pid 61056 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 61056 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 61056 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:28.416 00:04:28.416 real 0m6.878s 00:04:28.416 user 0m6.886s 00:04:28.416 sys 0m0.469s 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.416 ************************************ 00:04:28.416 END TEST skip_rpc_with_json 00:04:28.416 ************************************ 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:28.416 20:20:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.416 ************************************ 00:04:28.416 START TEST skip_rpc_with_delay 00:04:28.416 ************************************ 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.416 [2024-07-15 20:20:49.805158] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
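The app.c error above is the point of the skip_rpc_with_delay case: '--wait-for-rpc' only makes sense when an RPC server will be started, so combining it with '--no-rpc-server' must be rejected. Reduced to a minimal sketch (binary path as used in this run; illustrative only, not the harness code, which wraps the call in NOT):
# expected to exit non-zero and print the app.c error seen above
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo "exit status: $?"   # the test treats any non-zero status as a pass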
00:04:28.416 [2024-07-15 20:20:49.805291] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:28.416 00:04:28.416 real 0m0.085s 00:04:28.416 user 0m0.066s 00:04:28.416 sys 0m0.018s 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.416 20:20:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.416 ************************************ 00:04:28.416 END TEST skip_rpc_with_delay 00:04:28.416 ************************************ 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:28.416 20:20:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.416 20:20:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.416 20:20:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.416 20:20:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.416 ************************************ 00:04:28.416 START TEST exit_on_failed_rpc_init 00:04:28.416 ************************************ 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61165 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61165 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 61165 ']' 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.416 20:20:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.674 [2024-07-15 20:20:49.942274] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:04:28.674 [2024-07-15 20:20:49.942406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:04:28.674 [2024-07-15 20:20:50.081247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.947 [2024-07-15 20:20:50.176533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.512 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.512 20:20:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.512 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:29.512 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:29.512 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.770 [2024-07-15 20:20:51.080217] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:29.770 [2024-07-15 20:20:51.080319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61196 ] 00:04:29.770 [2024-07-15 20:20:51.219764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.027 [2024-07-15 20:20:51.293454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.027 [2024-07-15 20:20:51.293549] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
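This 'RPC Unix domain socket path /var/tmp/spdk.sock in use' failure is exactly what exit_on_failed_rpc_init is after: the first spdk_tgt (core mask 0x1) already owns the default RPC socket, so a second instance started without its own socket path has to fail during init. A minimal sketch of the conflict (same binary as this run; illustrative only):
TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$TGT -m 0x1 &     # first instance claims the default RPC socket /var/tmp/spdk.sock
sleep 1           # (the real harness waits for the listen socket instead of sleeping)
$TGT -m 0x2       # second instance fails to start its RPC service and exits non-zero
# a second instance would normally be given its own socket via -r, e.g. "$TGT -m 0x2 -r /var/tmp/spdk2.sock",
# which is precisely what this test does not do on purpose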
00:04:30.027 [2024-07-15 20:20:51.293566] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:30.027 [2024-07-15 20:20:51.293577] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61165 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 61165 ']' 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 61165 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61165 00:04:30.027 killing process with pid 61165 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61165' 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 61165 00:04:30.027 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 61165 00:04:30.284 00:04:30.284 real 0m1.786s 00:04:30.284 user 0m2.265s 00:04:30.284 sys 0m0.324s 00:04:30.284 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.284 ************************************ 00:04:30.284 20:20:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.284 END TEST exit_on_failed_rpc_init 00:04:30.284 ************************************ 00:04:30.284 20:20:51 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:30.284 20:20:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.284 ************************************ 00:04:30.284 END TEST skip_rpc 00:04:30.284 ************************************ 00:04:30.284 00:04:30.284 real 0m14.312s 00:04:30.284 user 0m14.298s 00:04:30.284 sys 0m1.175s 00:04:30.284 20:20:51 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.284 20:20:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.284 20:20:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.284 20:20:51 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:30.284 20:20:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.284 
20:20:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.284 20:20:51 -- common/autotest_common.sh@10 -- # set +x 00:04:30.284 ************************************ 00:04:30.284 START TEST rpc_client 00:04:30.284 ************************************ 00:04:30.284 20:20:51 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:30.542 * Looking for test storage... 00:04:30.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:30.542 20:20:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:30.542 OK 00:04:30.542 20:20:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:30.542 ************************************ 00:04:30.542 END TEST rpc_client 00:04:30.542 ************************************ 00:04:30.542 00:04:30.542 real 0m0.100s 00:04:30.542 user 0m0.043s 00:04:30.542 sys 0m0.064s 00:04:30.542 20:20:51 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.542 20:20:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:30.542 20:20:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:30.542 20:20:51 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.542 20:20:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.542 20:20:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.542 20:20:51 -- common/autotest_common.sh@10 -- # set +x 00:04:30.542 ************************************ 00:04:30.542 START TEST json_config 00:04:30.542 ************************************ 00:04:30.542 20:20:51 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.542 20:20:51 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:30.542 20:20:51 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.542 20:20:51 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.542 20:20:51 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.542 20:20:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.542 20:20:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.542 20:20:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.542 20:20:51 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.542 20:20:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@47 -- # : 0 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:30.542 20:20:51 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:30.542 INFO: JSON configuration test init 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:30.542 20:20:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.542 20:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:30.542 20:20:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.542 20:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.542 Waiting for target to run... 00:04:30.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.542 20:20:51 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.542 20:20:51 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.542 20:20:51 json_config -- json_config/common.sh@10 -- # shift 00:04:30.542 20:20:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.542 20:20:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.542 20:20:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.542 20:20:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.542 20:20:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.543 20:20:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61314 00:04:30.543 20:20:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
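The json_config harness above starts the target in '--wait-for-rpc' mode on a private RPC socket and then blocks until that socket answers. The start-and-wait pattern is roughly the following (flags and socket path taken from this run; waitforlisten in autotest_common.sh does a bit more bookkeeping):
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# poll until the RPC server is up; rpc_get_methods is served even before framework init
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done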
00:04:30.543 20:20:51 json_config -- json_config/common.sh@25 -- # waitforlisten 61314 /var/tmp/spdk_tgt.sock 00:04:30.543 20:20:51 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.543 20:20:51 json_config -- common/autotest_common.sh@829 -- # '[' -z 61314 ']' 00:04:30.543 20:20:51 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.543 20:20:51 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.543 20:20:51 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.543 20:20:51 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.543 20:20:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.802 [2024-07-15 20:20:52.065585] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:30.802 [2024-07-15 20:20:52.065834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61314 ] 00:04:31.060 [2024-07-15 20:20:52.368443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.060 [2024-07-15 20:20:52.426835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.627 20:20:53 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.627 20:20:53 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:31.627 20:20:53 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.627 00:04:31.627 20:20:53 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:31.627 20:20:53 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:31.627 20:20:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:31.627 20:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.627 20:20:53 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:31.627 20:20:53 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:31.627 20:20:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.627 20:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.884 20:20:53 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:31.884 20:20:53 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:31.884 20:20:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:32.141 20:20:53 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:32.141 20:20:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:32.141 20:20:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.141 20:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.141 20:20:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:32.141 20:20:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:32.141 20:20:53 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:04:32.141 20:20:53 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:32.141 20:20:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:32.141 20:20:53 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:32.705 20:20:53 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:32.705 20:20:53 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:32.705 20:20:53 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:32.705 20:20:53 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:32.705 20:20:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:32.705 20:20:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:32.705 20:20:54 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.705 20:20:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:32.705 20:20:54 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.705 20:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:32.963 MallocForNvmf0 00:04:32.963 20:20:54 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:32.963 20:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.221 MallocForNvmf1 00:04:33.221 20:20:54 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.221 20:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.479 [2024-07-15 20:20:54.926276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.479 20:20:54 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.479 20:20:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.737 20:20:55 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.737 20:20:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.995 20:20:55 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:33.995 20:20:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.560 20:20:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.560 20:20:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.820 [2024-07-15 20:20:56.063041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.820 20:20:56 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:34.820 20:20:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.820 20:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 20:20:56 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:34.820 20:20:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.820 20:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.820 20:20:56 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:34.820 20:20:56 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.820 20:20:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:35.078 MallocBdevForConfigChangeCheck 00:04:35.078 20:20:56 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:35.079 20:20:56 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:35.079 20:20:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.079 20:20:56 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:35.079 20:20:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.654 INFO: shutting down applications... 00:04:35.654 20:20:56 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
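Strung together, the NVMe-oF target setup that json_config just drove through tgt_rpc amounts to roughly this rpc.py sequence (arguments exactly as issued above; the final save_config dump is what the relaunch and diff steps below work from):
RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC save_config > spdk_tgt_config.json   # the harness keeps this dump for the reload/diff checks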
00:04:35.654 20:20:56 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:35.654 20:20:56 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:35.654 20:20:56 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:35.654 20:20:56 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:35.912 Calling clear_iscsi_subsystem 00:04:35.912 Calling clear_nvmf_subsystem 00:04:35.912 Calling clear_nbd_subsystem 00:04:35.912 Calling clear_ublk_subsystem 00:04:35.912 Calling clear_vhost_blk_subsystem 00:04:35.912 Calling clear_vhost_scsi_subsystem 00:04:35.912 Calling clear_bdev_subsystem 00:04:35.912 20:20:57 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:35.912 20:20:57 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:35.912 20:20:57 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:35.912 20:20:57 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.912 20:20:57 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:35.912 20:20:57 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:36.480 20:20:57 json_config -- json_config/json_config.sh@345 -- # break 00:04:36.480 20:20:57 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:36.480 20:20:57 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:36.480 20:20:57 json_config -- json_config/common.sh@31 -- # local app=target 00:04:36.480 20:20:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:36.480 20:20:57 json_config -- json_config/common.sh@35 -- # [[ -n 61314 ]] 00:04:36.480 20:20:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61314 00:04:36.480 20:20:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:36.480 20:20:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.480 20:20:57 json_config -- json_config/common.sh@41 -- # kill -0 61314 00:04:36.480 20:20:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.739 20:20:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.739 20:20:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.739 20:20:58 json_config -- json_config/common.sh@41 -- # kill -0 61314 00:04:36.739 20:20:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.739 20:20:58 json_config -- json_config/common.sh@43 -- # break 00:04:36.739 20:20:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.739 SPDK target shutdown done 00:04:36.739 20:20:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.739 INFO: relaunching applications... 00:04:36.739 20:20:58 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
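The teardown above has two halves: clear_config.py strips every subsystem back out over RPC, then the app is signalled and the harness polls for the pid to disappear. The stop-and-wait loop is essentially this (pid and limits from this run; a sketch, not the common.sh code verbatim):
kill -SIGINT 61314
for ((i = 0; i < 30; i++)); do
        kill -0 61314 2>/dev/null || break   # process gone -> "SPDK target shutdown done"
        sleep 0.5
done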
00:04:36.739 20:20:58 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.739 20:20:58 json_config -- json_config/common.sh@9 -- # local app=target 00:04:36.739 20:20:58 json_config -- json_config/common.sh@10 -- # shift 00:04:36.739 20:20:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.739 20:20:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.739 20:20:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.739 20:20:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.739 20:20:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.739 20:20:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61594 00:04:36.739 20:20:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.739 Waiting for target to run... 00:04:36.739 20:20:58 json_config -- json_config/common.sh@25 -- # waitforlisten 61594 /var/tmp/spdk_tgt.sock 00:04:36.739 20:20:58 json_config -- common/autotest_common.sh@829 -- # '[' -z 61594 ']' 00:04:36.739 20:20:58 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.739 20:20:58 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.739 20:20:58 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.739 20:20:58 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.739 20:20:58 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.739 20:20:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.997 [2024-07-15 20:20:58.262980] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:36.998 [2024-07-15 20:20:58.263100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61594 ] 00:04:37.256 [2024-07-15 20:20:58.562381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.256 [2024-07-15 20:20:58.620289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.515 [2024-07-15 20:20:58.937860] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.515 [2024-07-15 20:20:58.969932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:38.083 20:20:59 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.083 20:20:59 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:38.083 00:04:38.083 20:20:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:38.083 20:20:59 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:38.083 INFO: Checking if target configuration is the same... 00:04:38.083 20:20:59 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 
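With the target relaunched from the saved JSON, the "configuration is the same" check that follows boils down to dumping the live config again, normalizing both documents, and diffing them. A rough sketch of what json_diff.sh does (helper paths from this repo; the /tmp file names are placeholders, the harness uses mktemp):
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'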
00:04:38.083 20:20:59 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.083 20:20:59 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:38.083 20:20:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.083 + '[' 2 -ne 2 ']' 00:04:38.083 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:38.083 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:38.083 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:38.083 +++ basename /dev/fd/62 00:04:38.083 ++ mktemp /tmp/62.XXX 00:04:38.083 + tmp_file_1=/tmp/62.xxd 00:04:38.083 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.083 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.083 + tmp_file_2=/tmp/spdk_tgt_config.json.7TX 00:04:38.083 + ret=0 00:04:38.083 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:38.341 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:38.341 + diff -u /tmp/62.xxd /tmp/spdk_tgt_config.json.7TX 00:04:38.341 INFO: JSON config files are the same 00:04:38.341 + echo 'INFO: JSON config files are the same' 00:04:38.341 + rm /tmp/62.xxd /tmp/spdk_tgt_config.json.7TX 00:04:38.341 + exit 0 00:04:38.341 20:20:59 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:38.341 INFO: changing configuration and checking if this can be detected... 00:04:38.341 20:20:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:38.341 20:20:59 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.599 20:20:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.858 20:21:00 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:38.858 20:21:00 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.858 20:21:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.858 + '[' 2 -ne 2 ']' 00:04:38.858 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:38.858 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:38.858 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:38.858 +++ basename /dev/fd/62 00:04:38.858 ++ mktemp /tmp/62.XXX 00:04:38.858 + tmp_file_1=/tmp/62.5hq 00:04:38.858 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:38.858 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.858 + tmp_file_2=/tmp/spdk_tgt_config.json.Mf2 00:04:38.858 + ret=0 00:04:38.858 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:39.117 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:39.376 + diff -u /tmp/62.5hq /tmp/spdk_tgt_config.json.Mf2 00:04:39.376 + ret=1 00:04:39.376 + echo '=== Start of file: /tmp/62.5hq ===' 00:04:39.376 + cat /tmp/62.5hq 00:04:39.376 + echo '=== End of file: /tmp/62.5hq ===' 00:04:39.376 + echo '' 00:04:39.376 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Mf2 ===' 00:04:39.376 + cat /tmp/spdk_tgt_config.json.Mf2 00:04:39.376 + echo '=== End of file: /tmp/spdk_tgt_config.json.Mf2 ===' 00:04:39.376 + echo '' 00:04:39.376 + rm /tmp/62.5hq /tmp/spdk_tgt_config.json.Mf2 00:04:39.376 + exit 1 00:04:39.376 INFO: configuration change detected. 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@317 -- # [[ -n 61594 ]] 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.376 20:21:00 json_config -- json_config/json_config.sh@323 -- # killprocess 61594 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@948 -- # '[' -z 61594 ']' 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@952 -- # kill -0 61594 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@953 -- # uname 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61594 00:04:39.376 
20:21:00 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61594' 00:04:39.376 killing process with pid 61594 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@967 -- # kill 61594 00:04:39.376 20:21:00 json_config -- common/autotest_common.sh@972 -- # wait 61594 00:04:39.636 20:21:00 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:39.636 20:21:00 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:39.636 20:21:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.636 20:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.636 20:21:00 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:39.636 INFO: Success 00:04:39.636 20:21:00 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:39.636 00:04:39.636 real 0m9.032s 00:04:39.636 user 0m13.510s 00:04:39.636 sys 0m1.630s 00:04:39.636 20:21:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.636 20:21:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.636 ************************************ 00:04:39.636 END TEST json_config 00:04:39.636 ************************************ 00:04:39.636 20:21:00 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.636 20:21:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:39.636 20:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.636 20:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.636 20:21:00 -- common/autotest_common.sh@10 -- # set +x 00:04:39.636 ************************************ 00:04:39.636 START TEST json_config_extra_key 00:04:39.636 ************************************ 00:04:39.636 20:21:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.636 20:21:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.636 20:21:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.636 20:21:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.636 20:21:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.636 20:21:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.636 20:21:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.636 20:21:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.636 20:21:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.636 20:21:01 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.636 20:21:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.636 INFO: launching applications... 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:39.636 20:21:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61764 00:04:39.636 Waiting for target to run... 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
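Here the target is booted directly from a JSON file (test/json_config/extra_key.json) instead of being configured over RPC. A file in the same "subsystems"/"config"/"method"/"params" shape as the dump earlier in this log can be as small as the following (contents illustrative only; this is not the actual extra_key.json):
cat > /tmp/minimal_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 } }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --json /tmp/minimal_config.json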
00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61764 /var/tmp/spdk_tgt.sock 00:04:39.636 20:21:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:39.636 20:21:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 61764 ']' 00:04:39.636 20:21:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.636 20:21:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.636 20:21:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.636 20:21:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.636 20:21:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.636 [2024-07-15 20:21:01.113441] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:39.636 [2024-07-15 20:21:01.113534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61764 ] 00:04:40.203 [2024-07-15 20:21:01.414174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.203 [2024-07-15 20:21:01.470501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.774 20:21:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.774 00:04:40.774 20:21:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:40.774 20:21:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:40.774 INFO: shutting down applications... 00:04:40.774 20:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
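The shutdown path exercised in the entries that follow (json_config_test_shutdown_app) stops the target with SIGINT and then polls kill -0 until the process is gone, rather than killing it hard. A sketch of that pattern, with the 30-iteration limit and 0.5 s interval copied from the xtrace:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done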
00:04:40.774 20:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:40.774 20:21:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:40.774 20:21:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:40.774 20:21:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61764 ]] 00:04:40.775 20:21:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61764 00:04:40.775 20:21:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:40.775 20:21:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.775 20:21:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61764 00:04:40.775 20:21:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61764 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:41.342 SPDK target shutdown done 00:04:41.342 20:21:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:41.342 Success 00:04:41.342 20:21:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:41.342 00:04:41.342 real 0m1.710s 00:04:41.342 user 0m1.680s 00:04:41.342 sys 0m0.304s 00:04:41.342 20:21:02 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.342 20:21:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.342 ************************************ 00:04:41.342 END TEST json_config_extra_key 00:04:41.342 ************************************ 00:04:41.342 20:21:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:41.342 20:21:02 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.342 20:21:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.342 20:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.342 20:21:02 -- common/autotest_common.sh@10 -- # set +x 00:04:41.342 ************************************ 00:04:41.342 START TEST alias_rpc 00:04:41.342 ************************************ 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.342 * Looking for test storage... 
00:04:41.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:41.342 20:21:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:41.342 20:21:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61846 00:04:41.342 20:21:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61846 00:04:41.342 20:21:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.342 20:21:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.600 [2024-07-15 20:21:02.893509] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:41.600 [2024-07-15 20:21:02.893616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61846 ] 00:04:41.600 [2024-07-15 20:21:03.032684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.859 [2024-07-15 20:21:03.107779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.859 20:21:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.859 20:21:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:41.859 20:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:42.117 20:21:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61846 00:04:42.117 20:21:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 61846 ']' 00:04:42.117 20:21:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 61846 00:04:42.117 20:21:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:42.117 20:21:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.117 20:21:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61846 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.375 killing process with pid 61846 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61846' 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@967 -- # kill 61846 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 61846 00:04:42.375 00:04:42.375 real 0m1.133s 00:04:42.375 user 0m1.300s 00:04:42.375 sys 0m0.350s 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.375 20:21:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.375 ************************************ 00:04:42.375 END TEST alias_rpc 00:04:42.375 ************************************ 00:04:42.635 
20:21:03 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.635 20:21:03 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:04:42.635 20:21:03 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.635 20:21:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.635 20:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.635 20:21:03 -- common/autotest_common.sh@10 -- # set +x 00:04:42.635 ************************************ 00:04:42.635 START TEST dpdk_mem_utility 00:04:42.635 ************************************ 00:04:42.635 20:21:03 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.635 * Looking for test storage... 00:04:42.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:42.635 20:21:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.635 20:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61919 00:04:42.635 20:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.635 20:21:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61919 00:04:42.635 20:21:04 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 61919 ']' 00:04:42.635 20:21:04 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.635 20:21:04 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.635 20:21:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.635 20:21:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.635 20:21:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.635 [2024-07-15 20:21:04.069270] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
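The dpdk_mem_utility test that follows asks the running target for a DPDK memory dump over RPC and then post-processes it with scripts/dpdk_mem_info.py: a plain run for the heap/mempool/memzone summary, and a second run with -m 0, which here produces the detailed per-element listing for heap 0 seen below. The manual equivalent of those steps, assuming the default /var/tmp/spdk.sock socket used by this test:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0            # detailed element listing (heap id 0 below)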
00:04:42.635 [2024-07-15 20:21:04.069373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61919 ] 00:04:42.895 [2024-07-15 20:21:04.205783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.895 [2024-07-15 20:21:04.281297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.833 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.833 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:43.833 20:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.833 20:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.833 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.833 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.833 { 00:04:43.833 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.833 } 00:04:43.833 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.833 20:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:43.833 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:43.833 1 heaps totaling size 814.000000 MiB 00:04:43.833 size: 814.000000 MiB heap id: 0 00:04:43.833 end heaps---------- 00:04:43.833 8 mempools totaling size 598.116089 MiB 00:04:43.833 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:43.833 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:43.833 size: 84.521057 MiB name: bdev_io_61919 00:04:43.833 size: 51.011292 MiB name: evtpool_61919 00:04:43.833 size: 50.003479 MiB name: msgpool_61919 00:04:43.833 size: 21.763794 MiB name: PDU_Pool 00:04:43.833 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:43.833 size: 0.026123 MiB name: Session_Pool 00:04:43.833 end mempools------- 00:04:43.833 6 memzones totaling size 4.142822 MiB 00:04:43.833 size: 1.000366 MiB name: RG_ring_0_61919 00:04:43.833 size: 1.000366 MiB name: RG_ring_1_61919 00:04:43.833 size: 1.000366 MiB name: RG_ring_4_61919 00:04:43.833 size: 1.000366 MiB name: RG_ring_5_61919 00:04:43.833 size: 0.125366 MiB name: RG_ring_2_61919 00:04:43.833 size: 0.015991 MiB name: RG_ring_3_61919 00:04:43.833 end memzones------- 00:04:43.833 20:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.833 heap id: 0 total size: 814.000000 MiB number of busy elements: 225 number of free elements: 15 00:04:43.833 list of free elements. 
size: 12.485657 MiB 00:04:43.833 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:43.833 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:43.833 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:43.833 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:43.833 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:43.833 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:43.833 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:43.833 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:43.833 element at address: 0x200000200000 with size: 0.837036 MiB 00:04:43.833 element at address: 0x20001aa00000 with size: 0.571899 MiB 00:04:43.833 element at address: 0x20000b200000 with size: 0.489807 MiB 00:04:43.833 element at address: 0x200000800000 with size: 0.487061 MiB 00:04:43.833 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:43.833 element at address: 0x200027e00000 with size: 0.398315 MiB 00:04:43.833 element at address: 0x200003a00000 with size: 0.350769 MiB 00:04:43.833 list of standard malloc elements. size: 199.251770 MiB 00:04:43.833 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:43.833 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:43.833 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:43.833 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:43.833 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:43.833 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:43.833 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:43.833 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:43.833 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:43.833 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:04:43.833 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:43.833 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:43.833 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94240 
with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:43.834 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e66040 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6de00 with size: 0.000183 MiB 
00:04:43.834 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:43.834 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:43.834 list of memzone associated elements. 
size: 602.262573 MiB 00:04:43.834 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:43.834 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.834 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:43.834 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:43.834 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:43.834 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61919_0 00:04:43.834 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:43.834 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61919_0 00:04:43.834 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:43.834 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61919_0 00:04:43.834 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:43.834 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:43.834 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:43.834 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.834 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:43.834 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61919 00:04:43.834 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:43.834 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61919 00:04:43.834 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:43.834 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61919 00:04:43.834 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:43.834 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.834 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:43.834 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.834 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:43.834 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.834 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:43.834 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.834 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:43.834 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61919 00:04:43.834 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:43.834 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61919 00:04:43.834 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:43.834 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61919 00:04:43.834 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:43.834 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61919 00:04:43.834 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:43.834 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61919 00:04:43.834 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:43.835 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.835 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:43.835 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.835 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:43.835 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.835 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:43.835 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61919 00:04:43.835 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:43.835 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.835 element at address: 0x200027e66100 with size: 0.023743 MiB 00:04:43.835 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.835 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:43.835 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61919 00:04:43.835 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:04:43.835 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.835 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:43.835 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61919 00:04:43.835 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:43.835 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61919 00:04:43.835 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:04:43.835 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.835 20:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.835 20:21:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61919 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 61919 ']' 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 61919 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61919 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:43.835 killing process with pid 61919 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61919' 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 61919 00:04:43.835 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 61919 00:04:44.093 ************************************ 00:04:44.093 END TEST dpdk_mem_utility 00:04:44.093 ************************************ 00:04:44.093 00:04:44.093 real 0m1.586s 00:04:44.093 user 0m1.864s 00:04:44.093 sys 0m0.341s 00:04:44.093 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.093 20:21:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.093 20:21:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.093 20:21:05 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.093 20:21:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.093 20:21:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.093 20:21:05 -- common/autotest_common.sh@10 -- # set +x 00:04:44.093 ************************************ 00:04:44.093 START TEST event 00:04:44.093 ************************************ 00:04:44.093 20:21:05 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.352 * Looking for test storage... 
00:04:44.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:44.352 20:21:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:44.352 20:21:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:44.352 20:21:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.352 20:21:05 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:44.352 20:21:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.352 20:21:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.352 ************************************ 00:04:44.352 START TEST event_perf 00:04:44.352 ************************************ 00:04:44.352 20:21:05 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.352 Running I/O for 1 seconds...[2024-07-15 20:21:05.659219] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:04:44.352 [2024-07-15 20:21:05.659311] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62014 ] 00:04:44.352 [2024-07-15 20:21:05.797106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.610 [2024-07-15 20:21:05.873543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.610 [2024-07-15 20:21:05.873643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.610 Running I/O for 1 seconds...[2024-07-15 20:21:05.873770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.610 [2024-07-15 20:21:05.873775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.546 00:04:45.546 lcore 0: 185206 00:04:45.546 lcore 1: 185205 00:04:45.546 lcore 2: 185207 00:04:45.546 lcore 3: 185206 00:04:45.546 done. 00:04:45.546 00:04:45.546 real 0m1.307s 00:04:45.546 user 0m4.127s 00:04:45.546 sys 0m0.054s 00:04:45.546 20:21:06 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.546 20:21:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:45.546 ************************************ 00:04:45.546 END TEST event_perf 00:04:45.546 ************************************ 00:04:45.546 20:21:06 event -- common/autotest_common.sh@1142 -- # return 0 00:04:45.546 20:21:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:45.546 20:21:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:45.546 20:21:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.546 20:21:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.546 ************************************ 00:04:45.546 START TEST event_reactor 00:04:45.546 ************************************ 00:04:45.546 20:21:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:45.546 [2024-07-15 20:21:07.018139] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
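event_perf above and the reactor/reactor_perf runs below are standalone SPDK test apps driven straight from the build tree; each is given a run time in seconds (and, for event_perf, a core mask) and prints its counters on exit. The invocations used in this run, with flags copied from the xtrace:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1    # per-lcore event counts
    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1                 # oneshot/tick trace
    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1       # events-per-second figure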
00:04:45.546 [2024-07-15 20:21:07.018233] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62047 ] 00:04:45.804 [2024-07-15 20:21:07.157070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.805 [2024-07-15 20:21:07.227923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.181 test_start 00:04:47.181 oneshot 00:04:47.181 tick 100 00:04:47.181 tick 100 00:04:47.181 tick 250 00:04:47.181 tick 100 00:04:47.181 tick 100 00:04:47.181 tick 100 00:04:47.181 tick 500 00:04:47.181 tick 250 00:04:47.181 tick 100 00:04:47.181 tick 100 00:04:47.181 tick 250 00:04:47.181 tick 100 00:04:47.181 tick 100 00:04:47.181 test_end 00:04:47.181 00:04:47.181 real 0m1.296s 00:04:47.181 user 0m1.145s 00:04:47.181 sys 0m0.044s 00:04:47.181 20:21:08 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.182 20:21:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:47.182 ************************************ 00:04:47.182 END TEST event_reactor 00:04:47.182 ************************************ 00:04:47.182 20:21:08 event -- common/autotest_common.sh@1142 -- # return 0 00:04:47.182 20:21:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.182 20:21:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:47.182 20:21:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.182 20:21:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.182 ************************************ 00:04:47.182 START TEST event_reactor_perf 00:04:47.182 ************************************ 00:04:47.182 20:21:08 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.182 [2024-07-15 20:21:08.364185] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:04:47.182 [2024-07-15 20:21:08.364298] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ] 00:04:47.182 [2024-07-15 20:21:08.501364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.182 [2024-07-15 20:21:08.566160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.559 test_start 00:04:48.559 test_end 00:04:48.559 Performance: 335496 events per second 00:04:48.559 00:04:48.559 real 0m1.290s 00:04:48.559 user 0m1.144s 00:04:48.559 sys 0m0.039s 00:04:48.559 20:21:09 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.559 ************************************ 00:04:48.559 20:21:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:48.559 END TEST event_reactor_perf 00:04:48.559 ************************************ 00:04:48.559 20:21:09 event -- common/autotest_common.sh@1142 -- # return 0 00:04:48.559 20:21:09 event -- event/event.sh@49 -- # uname -s 00:04:48.559 20:21:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:48.559 20:21:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:48.559 20:21:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.559 20:21:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.559 20:21:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.559 ************************************ 00:04:48.559 START TEST event_scheduler 00:04:48.559 ************************************ 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:48.559 * Looking for test storage... 00:04:48.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:48.559 20:21:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:48.559 20:21:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62144 00:04:48.559 20:21:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:48.559 20:21:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.559 20:21:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62144 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 62144 ']' 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.559 20:21:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.559 [2024-07-15 20:21:09.821109] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:04:48.559 [2024-07-15 20:21:09.821218] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62144 ] 00:04:48.559 [2024-07-15 20:21:09.960085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.559 [2024-07-15 20:21:10.023027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.559 [2024-07-15 20:21:10.023113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.559 [2024-07-15 20:21:10.023243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.559 [2024-07-15 20:21:10.023463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:48.820 20:21:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.820 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.820 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.820 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.820 POWER: Cannot set governor of lcore 0 to performance 00:04:48.820 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.820 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.820 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:48.820 POWER: Cannot set governor of lcore 0 to userspace 00:04:48.820 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:48.820 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:48.820 POWER: Unable to set Power Management Environment for lcore 0 00:04:48.820 [2024-07-15 20:21:10.071373] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:04:48.820 [2024-07-15 20:21:10.071578] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:04:48.820 [2024-07-15 20:21:10.071890] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:48.820 [2024-07-15 20:21:10.072079] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:48.820 [2024-07-15 20:21:10.072363] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:48.820 [2024-07-15 20:21:10.072449] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.820 20:21:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.820 [2024-07-15 20:21:10.130297] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
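Because the scheduler test app is started with --wait-for-rpc, the test selects the dynamic scheduler over RPC and only then lets framework init finish; the POWER/governor notices above simply mean this VM exposes no cpufreq sysfs entries or virtio power channel, so the dpdk governor is skipped while the dynamic scheduler still comes up with the load/core/busy limits (20/80/95) logged in the trace. rpc_cmd in the trace is the test wrapper around these JSON-RPC methods; a plain rpc.py sketch of the two calls:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init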
00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.820 20:21:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.820 20:21:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:48.820 ************************************ 00:04:48.820 START TEST scheduler_create_thread 00:04:48.820 ************************************ 00:04:48.820 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:48.820 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:48.820 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.820 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 2 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 3 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 4 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 5 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 6 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 7 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 8 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 9 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 10 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.821 20:21:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.197 20:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:50.197 20:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:50.197 20:21:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:50.197 20:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.197 20:21:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.571 20:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.571 00:04:51.571 real 0m2.613s 00:04:51.571 user 0m0.016s 00:04:51.571 sys 0m0.008s 00:04:51.571 20:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.571 20:21:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.571 ************************************ 00:04:51.571 END TEST scheduler_create_thread 00:04:51.571 ************************************ 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:51.571 20:21:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:51.571 20:21:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62144 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 62144 ']' 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 62144 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62144 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:51.571 killing process with pid 62144 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62144' 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 62144 00:04:51.571 20:21:12 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 62144 00:04:51.829 [2024-07-15 20:21:13.234480] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
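Before the event_scheduler totals below, a recap of what scheduler_create_thread just exercised: every step is an RPC against the running scheduler test app. The following is a condensed sketch of that sequence, assuming rpc_cmd is the autotest_common.sh wrapper around scripts/rpc.py with the scheduler plugin loaded (the wrapper itself is not reproduced here):

    # four busy threads, one pinned to each of cores 0-3 (activity 100)
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    # four matching idle threads on the same cores (activity 0)
    for mask in 0x1 0x2 0x4 0x8; do
        rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # unpinned threads: one created at 30% activity, one raised to 50% after creation
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # threads can also be deleted again, which the test checks last
    thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"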
00:04:52.087 00:04:52.087 real 0m3.719s 00:04:52.087 user 0m5.486s 00:04:52.087 sys 0m0.278s 00:04:52.087 20:21:13 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.087 20:21:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.087 ************************************ 00:04:52.087 END TEST event_scheduler 00:04:52.087 ************************************ 00:04:52.087 20:21:13 event -- common/autotest_common.sh@1142 -- # return 0 00:04:52.087 20:21:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:52.087 20:21:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:52.087 20:21:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.087 20:21:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.087 20:21:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.087 ************************************ 00:04:52.087 START TEST app_repeat 00:04:52.087 ************************************ 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:52.087 Process app_repeat pid: 62243 00:04:52.087 spdk_app_start Round 0 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62243 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62243' 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:52.087 20:21:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62243 /var/tmp/spdk-nbd.sock 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62243 ']' 00:04:52.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.087 20:21:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.087 [2024-07-15 20:21:13.490304] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
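app_repeat is launched with its own RPC socket (-r /var/tmp/spdk-nbd.sock) so the test can talk to it without disturbing the default spdk.sock, and waitforlisten blocks until that socket answers. The real helper lives in autotest_common.sh; the loop below is only a rough equivalent of what it does, with the retry budget taken from the trace (max_retries=100) and the poll interval assumed:

    wait_for_rpc_socket() {
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            # ready as soon as any RPC succeeds on the socket, e.g. rpc_get_methods
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    wait_for_rpc_socket /var/tmp/spdk-nbd.sock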
00:04:52.087 [2024-07-15 20:21:13.490408] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62243 ] 00:04:52.345 [2024-07-15 20:21:13.630072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:52.345 [2024-07-15 20:21:13.703186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.345 [2024-07-15 20:21:13.703199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.345 20:21:13 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.345 20:21:13 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:52.345 20:21:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.603 Malloc0 00:04:52.603 20:21:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.168 Malloc1 00:04:53.168 20:21:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.168 /dev/nbd0 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.168 20:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.168 20:21:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.168 20:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.168 20:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.168 20:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.168 20:21:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.427 20:21:14 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.427 1+0 records in 00:04:53.427 1+0 records out 00:04:53.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236095 s, 17.3 MB/s 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.427 20:21:14 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.427 20:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.427 20:21:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.427 20:21:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.684 /dev/nbd1 00:04:53.684 20:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.684 20:21:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.684 20:21:14 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.684 1+0 records in 00:04:53.684 1+0 records out 00:04:53.684 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353487 s, 11.6 MB/s 00:04:53.684 20:21:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.684 20:21:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.684 20:21:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:53.684 20:21:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.684 20:21:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.684 20:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.684 20:21:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.684 20:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.684 20:21:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
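Each nbd_start_disk above is followed by waitfornbd, which confirms the kernel really exposed the device: poll /proc/partitions for the name, then read one 4 KiB block with O_DIRECT and check that it produced data, exactly the grep/dd/stat lines in the trace. A minimal sketch along those lines (retry limits mirror the trace; the temp-file location and sleep interval are assumptions):

    waitfornbd_sketch() {
        local nbd_name=$1 tmp i size
        tmp=$(mktemp)
        for ((i = 1; i <= 20; i++)); do
            # wait for the kernel to register the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # prove the device is actually readable with O_DIRECT
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        rm -f "$tmp"
        return 1
    }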
00:04:53.684 20:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.941 20:21:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:53.941 { 00:04:53.941 "bdev_name": "Malloc0", 00:04:53.941 "nbd_device": "/dev/nbd0" 00:04:53.941 }, 00:04:53.941 { 00:04:53.941 "bdev_name": "Malloc1", 00:04:53.942 "nbd_device": "/dev/nbd1" 00:04:53.942 } 00:04:53.942 ]' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.942 { 00:04:53.942 "bdev_name": "Malloc0", 00:04:53.942 "nbd_device": "/dev/nbd0" 00:04:53.942 }, 00:04:53.942 { 00:04:53.942 "bdev_name": "Malloc1", 00:04:53.942 "nbd_device": "/dev/nbd1" 00:04:53.942 } 00:04:53.942 ]' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.942 /dev/nbd1' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.942 /dev/nbd1' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.942 256+0 records in 00:04:53.942 256+0 records out 00:04:53.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0059146 s, 177 MB/s 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.942 256+0 records in 00:04:53.942 256+0 records out 00:04:53.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255704 s, 41.0 MB/s 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.942 256+0 records in 00:04:53.942 256+0 records out 00:04:53.942 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026887 s, 39.0 MB/s 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.942 20:21:15 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.942 20:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.199 20:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.456 20:21:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.713 20:21:16 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.713 20:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.970 20:21:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.970 20:21:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.227 20:21:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.486 [2024-07-15 20:21:16.822991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.486 [2024-07-15 20:21:16.891273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.486 [2024-07-15 20:21:16.891281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.486 [2024-07-15 20:21:16.924472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.486 [2024-07-15 20:21:16.924521] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.771 20:21:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.771 spdk_app_start Round 1 00:04:58.771 20:21:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:58.771 20:21:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62243 /var/tmp/spdk-nbd.sock 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62243 ']' 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
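Every app_repeat round repeats the data pass shown above: 1 MiB of random data is pushed through both nbd devices with O_DIRECT and then compared back byte-for-byte. This is nbd_dd_data_verify in nbd_common.sh; the sketch below condenses it under the assumption that exactly /dev/nbd0 and /dev/nbd1 are exported, and uses mktemp instead of the nbdrandtest path from the trace:

    verify_nbd_data() {
        local tmp_file nbd rc=0
        tmp_file=$(mktemp)
        # write phase: 256 x 4 KiB blocks of random data, written through each device
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for nbd in /dev/nbd0 /dev/nbd1; do
            dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        done
        # verify phase: the first 1 MiB read back from each device must match the file
        for nbd in /dev/nbd0 /dev/nbd1; do
            cmp -b -n 1M "$tmp_file" "$nbd" || rc=1
        done
        rm "$tmp_file"
        return "$rc"
    }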
00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.771 20:21:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.771 20:21:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.771 Malloc0 00:04:59.030 20:21:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.382 Malloc1 00:04:59.382 20:21:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.382 20:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.642 /dev/nbd0 00:04:59.642 20:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.642 20:21:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.642 1+0 records in 00:04:59.642 1+0 records out 
00:04:59.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438528 s, 9.3 MB/s 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.642 20:21:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.642 20:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.642 20:21:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.642 20:21:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.900 /dev/nbd1 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.900 1+0 records in 00:04:59.900 1+0 records out 00:04:59.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000707345 s, 5.8 MB/s 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.900 20:21:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.900 20:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:00.159 { 00:05:00.159 "bdev_name": "Malloc0", 00:05:00.159 "nbd_device": "/dev/nbd0" 00:05:00.159 }, 00:05:00.159 { 00:05:00.159 "bdev_name": "Malloc1", 00:05:00.159 "nbd_device": "/dev/nbd1" 00:05:00.159 } 
00:05:00.159 ]' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:00.159 { 00:05:00.159 "bdev_name": "Malloc0", 00:05:00.159 "nbd_device": "/dev/nbd0" 00:05:00.159 }, 00:05:00.159 { 00:05:00.159 "bdev_name": "Malloc1", 00:05:00.159 "nbd_device": "/dev/nbd1" 00:05:00.159 } 00:05:00.159 ]' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:00.159 /dev/nbd1' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:00.159 /dev/nbd1' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:00.159 256+0 records in 00:05:00.159 256+0 records out 00:05:00.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00625568 s, 168 MB/s 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:00.159 256+0 records in 00:05:00.159 256+0 records out 00:05:00.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02527 s, 41.5 MB/s 00:05:00.159 20:21:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:00.160 20:21:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:00.419 256+0 records in 00:05:00.419 256+0 records out 00:05:00.419 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277089 s, 37.8 MB/s 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.419 20:21:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.677 20:21:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.935 20:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:01.193 20:21:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:01.193 20:21:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.450 20:21:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.707 [2024-07-15 20:21:23.073220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.707 [2024-07-15 20:21:23.133640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.707 [2024-07-15 20:21:23.133650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.707 [2024-07-15 20:21:23.164480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.707 [2024-07-15 20:21:23.164534] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:04.989 spdk_app_start Round 2 00:05:04.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.989 20:21:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.989 20:21:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:04.989 20:21:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62243 /var/tmp/spdk-nbd.sock 00:05:04.989 20:21:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62243 ']' 00:05:04.989 20:21:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.989 20:21:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.989 20:21:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
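The nbd_get_count check that brackets every round is a small jq exercise: nbd_get_disks returns a JSON array, the nbd_device fields are extracted, and the entries matching /dev/nbd are counted, so the result is 2 while Malloc0/Malloc1 are exported and 0 after teardown. A standalone version of that parsing (rpc.py path as used in the trace):

    nbd_count() {
        local rpc_sock=$1 disks_json names
        disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
        names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero when nothing matches, hence the || true
        echo "$names" | grep -c /dev/nbd || true
    }

    count=$(nbd_count /var/tmp/spdk-nbd.sock)
    [[ $count -eq 2 ]]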
00:05:04.989 20:21:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.989 20:21:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.989 20:21:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.989 20:21:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.989 20:21:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.246 Malloc0 00:05:05.247 20:21:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.506 Malloc1 00:05:05.506 20:21:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.506 20:21:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:05.765 /dev/nbd0 00:05:05.765 20:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:05.765 20:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.765 1+0 records in 00:05:05.765 1+0 records out 
00:05:05.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210558 s, 19.5 MB/s 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:05.765 20:21:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:05.765 20:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.765 20:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.765 20:21:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.024 /dev/nbd1 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.024 1+0 records in 00:05:06.024 1+0 records out 00:05:06.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00289013 s, 1.4 MB/s 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:06.024 20:21:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.024 20:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.282 20:21:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.282 { 00:05:06.282 "bdev_name": "Malloc0", 00:05:06.282 "nbd_device": "/dev/nbd0" 00:05:06.282 }, 00:05:06.282 { 00:05:06.282 "bdev_name": "Malloc1", 00:05:06.282 "nbd_device": "/dev/nbd1" 00:05:06.282 } 
00:05:06.282 ]' 00:05:06.282 20:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.282 { 00:05:06.282 "bdev_name": "Malloc0", 00:05:06.282 "nbd_device": "/dev/nbd0" 00:05:06.282 }, 00:05:06.282 { 00:05:06.282 "bdev_name": "Malloc1", 00:05:06.282 "nbd_device": "/dev/nbd1" 00:05:06.282 } 00:05:06.282 ]' 00:05:06.282 20:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:06.539 20:21:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.539 /dev/nbd1' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.540 /dev/nbd1' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.540 256+0 records in 00:05:06.540 256+0 records out 00:05:06.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00665829 s, 157 MB/s 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.540 256+0 records in 00:05:06.540 256+0 records out 00:05:06.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253565 s, 41.4 MB/s 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:06.540 256+0 records in 00:05:06.540 256+0 records out 00:05:06.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.031531 s, 33.3 MB/s 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.540 20:21:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:06.798 20:21:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.055 20:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.313 20:21:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.313 20:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.313 20:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:07.570 20:21:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:07.570 20:21:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:07.828 20:21:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.828 [2024-07-15 20:21:29.250993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.828 [2024-07-15 20:21:29.308730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.828 [2024-07-15 20:21:29.308742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.085 [2024-07-15 20:21:29.337651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.085 [2024-07-15 20:21:29.337721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.365 20:21:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62243 /var/tmp/spdk-nbd.sock 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 62243 ']' 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
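Teardown in each round mirrors the setup: nbd_stop_disk detaches the device and waitfornbd_exit polls /proc/partitions until the name disappears, which is why the device count is back to 0 before spdk_kill_instance SIGTERM is sent. A brief sketch of that stop path (retry limit as in the trace, poll interval assumed):

    stop_nbd_disk() {
        local rpc_sock=$1 nbd_path=$2 nbd_name i
        nbd_name=$(basename "$nbd_path")
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" nbd_stop_disk "$nbd_path"
        for ((i = 1; i <= 20; i++)); do
            # done once the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }

    stop_nbd_disk /var/tmp/spdk-nbd.sock /dev/nbd0
    stop_nbd_disk /var/tmp/spdk-nbd.sock /dev/nbd1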
00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:11.365 20:21:32 event.app_repeat -- event/event.sh@39 -- # killprocess 62243 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 62243 ']' 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 62243 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62243 00:05:11.365 killing process with pid 62243 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62243' 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@967 -- # kill 62243 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@972 -- # wait 62243 00:05:11.365 spdk_app_start is called in Round 0. 00:05:11.365 Shutdown signal received, stop current app iteration 00:05:11.365 Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 reinitialization... 00:05:11.365 spdk_app_start is called in Round 1. 00:05:11.365 Shutdown signal received, stop current app iteration 00:05:11.365 Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 reinitialization... 00:05:11.365 spdk_app_start is called in Round 2. 00:05:11.365 Shutdown signal received, stop current app iteration 00:05:11.365 Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 reinitialization... 00:05:11.365 spdk_app_start is called in Round 3. 
00:05:11.365 Shutdown signal received, stop current app iteration 00:05:11.365 20:21:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:11.365 20:21:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:11.365 00:05:11.365 real 0m19.092s 00:05:11.365 user 0m43.711s 00:05:11.365 sys 0m2.872s 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.365 20:21:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.365 ************************************ 00:05:11.365 END TEST app_repeat 00:05:11.365 ************************************ 00:05:11.365 20:21:32 event -- common/autotest_common.sh@1142 -- # return 0 00:05:11.365 20:21:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:11.365 20:21:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.365 20:21:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.365 20:21:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.365 20:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.365 ************************************ 00:05:11.365 START TEST cpu_locks 00:05:11.365 ************************************ 00:05:11.365 20:21:32 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:11.365 * Looking for test storage... 00:05:11.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:11.365 20:21:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:11.365 20:21:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:11.365 20:21:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:11.365 20:21:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:11.365 20:21:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.365 20:21:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.365 20:21:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.365 ************************************ 00:05:11.365 START TEST default_locks 00:05:11.365 ************************************ 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62859 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62859 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 62859 ']' 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.365 20:21:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.365 [2024-07-15 20:21:32.762116] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:11.365 [2024-07-15 20:21:32.762202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62859 ] 00:05:11.623 [2024-07-15 20:21:32.896603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.623 [2024-07-15 20:21:32.964405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.556 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.556 20:21:33 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:12.556 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62859 00:05:12.556 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62859 00:05:12.556 20:21:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62859 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 62859 ']' 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 62859 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62859 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.814 killing process with pid 62859 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62859' 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 62859 00:05:12.814 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 62859 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62859 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62859 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62859 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- 
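The locks_exist helper traced above reduces to one lslocks call against the target's PID. A sketch of that check, with the PID and core mask taken from the trace; the lock-file name follows the /var/tmp/spdk_cpu_lock_* pattern that the later locking_overlapped_coremask steps expand:

# With -m 0x1 the target claims core 0, which shows up while it is alive as a
# POSIX lock on /var/tmp/spdk_cpu_lock_000. PID 62859 comes from the trace above.
pid=62859
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds its CPU core lock"
else
    echo "no core lock found for pid $pid" >&2
fi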
common/autotest_common.sh@829 -- # '[' -z 62859 ']' 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.073 ERROR: process (pid: 62859) is no longer running 00:05:13.073 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62859) - No such process 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:13.073 00:05:13.073 real 0m1.795s 00:05:13.073 user 0m2.086s 00:05:13.073 sys 0m0.470s 00:05:13.073 ************************************ 00:05:13.073 END TEST default_locks 00:05:13.073 ************************************ 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.073 20:21:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.073 20:21:34 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.073 20:21:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:13.073 20:21:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.073 20:21:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.073 20:21:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.073 ************************************ 00:05:13.073 START TEST default_locks_via_rpc 00:05:13.073 ************************************ 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62918 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62918 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 62918 ']' 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.073 20:21:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.331 [2024-07-15 20:21:34.611830] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:13.331 [2024-07-15 20:21:34.611942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62918 ] 00:05:13.331 [2024-07-15 20:21:34.744922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.331 [2024-07-15 20:21:34.824650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62918 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62918 00:05:14.266 20:21:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62918 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 62918 ']' 
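default_locks_via_rpc drives the same core lock through RPC instead of process start-up: the rpc_cmd calls above disable and re-enable the cpumask locks on a running target, then re-check lslocks. A minimal stand-alone sketch, assuming the target is already up on the default socket and discoverable via pgrep:

sock=/var/tmp/spdk.sock
./scripts/rpc.py -s "$sock" framework_disable_cpumask_locks   # release the per-core locks
./scripts/rpc.py -s "$sock" framework_enable_cpumask_locks    # claim them again
lslocks -p "$(pgrep -f spdk_tgt | head -n1)" | grep spdk_cpu_lock   # lock should be visible again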
00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 62918 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62918 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.855 killing process with pid 62918 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62918' 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 62918 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 62918 00:05:14.855 00:05:14.855 real 0m1.762s 00:05:14.855 user 0m2.043s 00:05:14.855 sys 0m0.443s 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.855 20:21:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.855 ************************************ 00:05:14.855 END TEST default_locks_via_rpc 00:05:14.855 ************************************ 00:05:14.855 20:21:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:14.855 20:21:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:14.855 20:21:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.855 20:21:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.855 20:21:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.113 ************************************ 00:05:15.113 START TEST non_locking_app_on_locked_coremask 00:05:15.113 ************************************ 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62987 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62987 /var/tmp/spdk.sock 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 62987 ']' 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.113 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.113 [2024-07-15 20:21:36.426228] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:15.113 [2024-07-15 20:21:36.426338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62987 ] 00:05:15.113 [2024-07-15 20:21:36.561182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.371 [2024-07-15 20:21:36.629480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63001 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63001 /var/tmp/spdk2.sock 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63001 ']' 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.371 20:21:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.628 [2024-07-15 20:21:36.876734] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:15.628 [2024-07-15 20:21:36.876808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63001 ] 00:05:15.628 [2024-07-15 20:21:37.018360] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
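non_locking_app_on_locked_coremask starts a second target on the core the first one has already locked, relying on --disable-cpumask-locks so the second instance skips the claim entirely (hence the 'CPU core locks deactivated' notice above). A sketch of the two launches under the masks and socket paths from the trace; the sleeps and the final checks are illustrative rather than the test's exact polling:

#!/usr/bin/env bash
# First target locks core 0 on the default socket; the second reuses core 0
# but opts out of the lock and serves RPC on its own socket.
./build/bin/spdk_tgt -m 0x1 &
first=$!
sleep 1
./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
second=$!
sleep 1
lslocks -p "$first"  | grep spdk_cpu_lock || echo "unexpected: first target holds no lock" >&2
lslocks -p "$second" | grep spdk_cpu_lock && echo "unexpected: second target took a lock" >&2
kill "$first" "$second"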
00:05:15.628 [2024-07-15 20:21:37.018405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.886 [2024-07-15 20:21:37.140788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.451 20:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.451 20:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.451 20:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62987 00:05:16.451 20:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62987 00:05:16.451 20:21:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62987 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 62987 ']' 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 62987 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62987 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.384 killing process with pid 62987 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62987' 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 62987 00:05:17.384 20:21:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 62987 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63001 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63001 ']' 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63001 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63001 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63001' 00:05:17.949 killing process with pid 63001 00:05:17.949 20:21:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63001 00:05:17.949 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63001 00:05:18.207 ************************************ 00:05:18.207 END TEST non_locking_app_on_locked_coremask 00:05:18.207 ************************************ 00:05:18.207 00:05:18.207 real 0m3.180s 00:05:18.207 user 0m3.739s 00:05:18.207 sys 0m0.877s 00:05:18.207 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.207 20:21:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 20:21:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:18.207 20:21:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:18.207 20:21:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.207 20:21:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.207 20:21:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 ************************************ 00:05:18.207 START TEST locking_app_on_unlocked_coremask 00:05:18.207 ************************************ 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:18.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63075 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63075 /var/tmp/spdk.sock 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63075 ']' 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.207 20:21:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.207 [2024-07-15 20:21:39.655439] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:18.207 [2024-07-15 20:21:39.655534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63075 ] 00:05:18.465 [2024-07-15 20:21:39.790064] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.465 [2024-07-15 20:21:39.790120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.465 [2024-07-15 20:21:39.852162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63095 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63095 /var/tmp/spdk2.sock 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63095 ']' 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.723 20:21:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.723 [2024-07-15 20:21:40.084230] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:18.723 [2024-07-15 20:21:40.084529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63095 ] 00:05:18.981 [2024-07-15 20:21:40.229661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.981 [2024-07-15 20:21:40.349192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.914 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.914 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:19.914 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63095 00:05:19.914 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63095 00:05:19.915 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.481 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63075 00:05:20.481 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63075 ']' 00:05:20.481 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63075 00:05:20.481 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.481 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.481 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63075 00:05:20.739 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.739 killing process with pid 63075 00:05:20.739 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.739 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63075' 00:05:20.739 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63075 00:05:20.739 20:21:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63075 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63095 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63095 ']' 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 63095 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63095 00:05:20.997 killing process with pid 63095 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.997 20:21:42 
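locking_app_on_unlocked_coremask flips that scenario: the first target starts with --disable-cpumask-locks and leaves core 0 unclaimed, so the second, normally locking target on the same core takes the lock, which is why lslocks above is run against pid 63095 rather than 63075. A sketch under the same assumptions as the previous one:

./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 unclaimed
first=$!
sleep 1
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims /var/tmp/spdk_cpu_lock_000
second=$!
sleep 1
lslocks -p "$second" | grep spdk_cpu_lock               # the lock belongs to the second pid
kill "$first" "$second"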
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.997 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63095' 00:05:20.998 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 63095 00:05:20.998 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 63095 00:05:21.256 ************************************ 00:05:21.256 END TEST locking_app_on_unlocked_coremask 00:05:21.256 ************************************ 00:05:21.256 00:05:21.256 real 0m3.130s 00:05:21.256 user 0m3.735s 00:05:21.256 sys 0m0.890s 00:05:21.256 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.256 20:21:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.514 20:21:42 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:21.515 20:21:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:21.515 20:21:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:21.515 20:21:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.515 20:21:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.515 ************************************ 00:05:21.515 START TEST locking_app_on_locked_coremask 00:05:21.515 ************************************ 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63163 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63163 /var/tmp/spdk.sock 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63163 ']' 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.515 20:21:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.515 [2024-07-15 20:21:42.823544] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:21.515 [2024-07-15 20:21:42.823955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63163 ] 00:05:21.515 [2024-07-15 20:21:42.959514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.773 [2024-07-15 20:21:43.032418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63183 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63183 /var/tmp/spdk2.sock 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63183 /var/tmp/spdk2.sock 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:21.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63183 /var/tmp/spdk2.sock 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 63183 ']' 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:21.773 20:21:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.031 [2024-07-15 20:21:43.282512] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:22.031 [2024-07-15 20:21:43.282633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63183 ] 00:05:22.031 [2024-07-15 20:21:43.429354] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63163 has claimed it. 00:05:22.031 [2024-07-15 20:21:43.429439] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.596 ERROR: process (pid: 63183) is no longer running 00:05:22.596 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63183) - No such process 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63163 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63163 00:05:22.596 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63163 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 63163 ']' 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 63163 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63163 00:05:23.164 killing process with pid 63163 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63163' 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 63163 00:05:23.164 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 63163 00:05:23.445 00:05:23.445 real 0m1.954s 00:05:23.445 user 0m2.339s 00:05:23.445 sys 0m0.485s 00:05:23.445 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.445 ************************************ 00:05:23.445 END 
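locking_app_on_locked_coremask exercises the failure path: with pid 63163 holding the core 0 lock, the second launch above aborts with 'Cannot create lock on core 0' and the test's NOT wrapper expects that non-zero exit. A minimal stand-alone version of the same expectation, assuming the first -m 0x1 target is still running and holding the lock:

# Expected to exit non-zero almost immediately: core 0 is already locked, so
# spdk_app_start() fails before any reactor is launched.
./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
echo "second target exited with status $? (non-zero expected)"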
TEST locking_app_on_locked_coremask 00:05:23.445 ************************************ 00:05:23.445 20:21:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.445 20:21:44 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:23.445 20:21:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:23.445 20:21:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.445 20:21:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.445 20:21:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.445 ************************************ 00:05:23.445 START TEST locking_overlapped_coremask 00:05:23.445 ************************************ 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63229 00:05:23.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63229 /var/tmp/spdk.sock 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63229 ']' 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.445 20:21:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.445 [2024-07-15 20:21:44.828799] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:23.445 [2024-07-15 20:21:44.828899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63229 ] 00:05:23.743 [2024-07-15 20:21:44.965602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.743 [2024-07-15 20:21:45.035824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.743 [2024-07-15 20:21:45.035956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.743 [2024-07-15 20:21:45.035961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63250 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63250 /var/tmp/spdk2.sock 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63250 /var/tmp/spdk2.sock 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63250 /var/tmp/spdk2.sock 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 63250 ']' 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.743 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.000 [2024-07-15 20:21:45.256090] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:24.000 [2024-07-15 20:21:45.256170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63250 ] 00:05:24.000 [2024-07-15 20:21:45.397862] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63229 has claimed it. 00:05:24.000 [2024-07-15 20:21:45.397938] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:24.565 ERROR: process (pid: 63250) is no longer running 00:05:24.565 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (63250) - No such process 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:24.565 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63229 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 63229 ']' 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 63229 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.566 20:21:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63229 00:05:24.566 20:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.566 20:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.566 20:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63229' 00:05:24.566 killing process with pid 63229 00:05:24.566 20:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 63229 00:05:24.566 20:21:46 
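locking_overlapped_coremask runs the first target with -m 0x7 and, as the check_remaining_locks expansion above shows, expects exactly /var/tmp/spdk_cpu_lock_000 through _002 to exist while the overlapping -m 0x1c launch is rejected over core 2. A sketch of that file-level comparison, mirroring the array check in the trace:

# Compare the lock files on disk against what a 0x7 mask (cores 0-2) should claim.
expected=(/var/tmp/spdk_cpu_lock_{000..002})
actual=(/var/tmp/spdk_cpu_lock_*)
if [[ "${actual[*]}" == "${expected[*]}" ]]; then
    echo "lock files match cores 0-2"
else
    echo "unexpected lock files: ${actual[*]}" >&2
fi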
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 63229 00:05:24.823 00:05:24.823 real 0m1.489s 00:05:24.823 user 0m4.015s 00:05:24.823 sys 0m0.288s 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.823 ************************************ 00:05:24.823 END TEST locking_overlapped_coremask 00:05:24.823 ************************************ 00:05:24.823 20:21:46 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:24.823 20:21:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:24.823 20:21:46 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.823 20:21:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.823 20:21:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.823 ************************************ 00:05:24.823 START TEST locking_overlapped_coremask_via_rpc 00:05:24.823 ************************************ 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63297 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63297 /var/tmp/spdk.sock 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:24.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63297 ']' 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.823 20:21:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.081 [2024-07-15 20:21:46.384318] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:25.081 [2024-07-15 20:21:46.384445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63297 ] 00:05:25.081 [2024-07-15 20:21:46.528111] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:25.081 [2024-07-15 20:21:46.528169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:25.338 [2024-07-15 20:21:46.592478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.338 [2024-07-15 20:21:46.592584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.338 [2024-07-15 20:21:46.592590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63326 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63326 /var/tmp/spdk2.sock 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63326 ']' 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.903 20:21:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.903 [2024-07-15 20:21:47.382086] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:25.903 [2024-07-15 20:21:47.382193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63326 ] 00:05:26.209 [2024-07-15 20:21:47.530228] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.209 [2024-07-15 20:21:47.530278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.209 [2024-07-15 20:21:47.652383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.209 [2024-07-15 20:21:47.656005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:26.209 [2024-07-15 20:21:47.656006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.143 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.144 [2024-07-15 20:21:48.436086] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63297 has claimed it. 
00:05:27.144 2024/07/15 20:21:48 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:27.144 request: 00:05:27.144 { 00:05:27.144 "method": "framework_enable_cpumask_locks", 00:05:27.144 "params": {} 00:05:27.144 } 00:05:27.144 Got JSON-RPC error response 00:05:27.144 GoRPCClient: error on JSON-RPC call 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63297 /var/tmp/spdk.sock 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63297 ']' 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.144 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63326 /var/tmp/spdk2.sock 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 63326 ']' 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
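The rejected call above is the behavior under test: the first target has already been told to claim its per-core lock files, so the second target cannot lock the shared core. A rough manual sketch of the same sequence, assuming an SPDK checkout as the working directory, hugepages already configured (scripts/setup.sh) and root privileges; the binaries, flags, RPC method and lock-file paths are the ones visible in this log:

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, default RPC socket /var/tmp/spdk.sock
build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # cores 2-4, separate RPC socket
sleep 2                                                                      # crude wait; the test uses waitforlisten instead
scripts/rpc.py framework_enable_cpumask_locks                                # first target claims its cores
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # expected to fail: core 2 is already claimed
ls /var/tmp/spdk_cpu_lock_*                                                  # spdk_cpu_lock_000..002, held by the first target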
00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.401 20:21:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.658 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.658 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:27.658 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:27.658 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:27.658 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:27.658 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:27.658 00:05:27.658 real 0m2.741s 00:05:27.658 user 0m1.480s 00:05:27.658 sys 0m0.189s 00:05:27.659 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.659 20:21:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.659 ************************************ 00:05:27.659 END TEST locking_overlapped_coremask_via_rpc 00:05:27.659 ************************************ 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.659 20:21:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:27.659 20:21:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63297 ]] 00:05:27.659 20:21:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63297 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63297 ']' 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63297 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63297 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.659 killing process with pid 63297 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63297' 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63297 00:05:27.659 20:21:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63297 00:05:27.916 20:21:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63326 ]] 00:05:27.916 20:21:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63326 00:05:27.916 20:21:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63326 ']' 00:05:27.916 20:21:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63326 00:05:27.916 20:21:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:27.917 20:21:49 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.917 20:21:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63326 00:05:27.917 20:21:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:27.917 20:21:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:27.917 killing process with pid 63326 00:05:27.917 20:21:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63326' 00:05:27.917 20:21:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 63326 00:05:27.917 20:21:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 63326 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63297 ]] 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63297 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63297 ']' 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63297 00:05:28.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63297) - No such process 00:05:28.174 Process with pid 63297 is not found 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63297 is not found' 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63326 ]] 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63326 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 63326 ']' 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 63326 00:05:28.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (63326) - No such process 00:05:28.174 Process with pid 63326 is not found 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 63326 is not found' 00:05:28.174 20:21:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:28.174 00:05:28.174 real 0m17.044s 00:05:28.174 user 0m31.684s 00:05:28.174 sys 0m4.256s 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.174 20:21:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.174 ************************************ 00:05:28.174 END TEST cpu_locks 00:05:28.174 ************************************ 00:05:28.432 20:21:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:28.432 00:05:28.432 real 0m44.133s 00:05:28.432 user 1m27.447s 00:05:28.432 sys 0m7.763s 00:05:28.432 20:21:49 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.432 20:21:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.432 ************************************ 00:05:28.432 END TEST event 00:05:28.432 ************************************ 00:05:28.432 20:21:49 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.432 20:21:49 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:28.432 20:21:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.432 20:21:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.432 20:21:49 -- common/autotest_common.sh@10 -- # set +x 00:05:28.432 ************************************ 00:05:28.432 START TEST thread 
00:05:28.432 ************************************ 00:05:28.432 20:21:49 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:28.432 * Looking for test storage... 00:05:28.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:28.432 20:21:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.432 20:21:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:28.432 20:21:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.432 20:21:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.432 ************************************ 00:05:28.432 START TEST thread_poller_perf 00:05:28.432 ************************************ 00:05:28.432 20:21:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:28.432 [2024-07-15 20:21:49.838780] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:28.432 [2024-07-15 20:21:49.839016] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63468 ] 00:05:28.690 [2024-07-15 20:21:49.979655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.690 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:28.690 [2024-07-15 20:21:50.050604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.648 ====================================== 00:05:29.648 busy:2208611915 (cyc) 00:05:29.648 total_run_count: 277000 00:05:29.648 tsc_hz: 2200000000 (cyc) 00:05:29.648 ====================================== 00:05:29.648 poller_cost: 7973 (cyc), 3624 (nsec) 00:05:29.648 00:05:29.648 real 0m1.308s 00:05:29.648 user 0m1.149s 00:05:29.648 sys 0m0.052s 00:05:29.648 20:21:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.648 20:21:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.648 ************************************ 00:05:29.648 END TEST thread_poller_perf 00:05:29.648 ************************************ 00:05:29.907 20:21:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:29.907 20:21:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.907 20:21:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:29.907 20:21:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.907 20:21:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.907 ************************************ 00:05:29.907 START TEST thread_poller_perf 00:05:29.907 ************************************ 00:05:29.907 20:21:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:29.907 [2024-07-15 20:21:51.196272] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:05:29.907 [2024-07-15 20:21:51.196379] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63504 ] 00:05:29.907 [2024-07-15 20:21:51.329058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.907 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:29.907 [2024-07-15 20:21:51.393803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.282 ====================================== 00:05:31.282 busy:2202273317 (cyc) 00:05:31.282 total_run_count: 3737000 00:05:31.282 tsc_hz: 2200000000 (cyc) 00:05:31.282 ====================================== 00:05:31.282 poller_cost: 589 (cyc), 267 (nsec) 00:05:31.282 00:05:31.282 real 0m1.290s 00:05:31.282 user 0m1.139s 00:05:31.282 sys 0m0.044s 00:05:31.282 20:21:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.282 20:21:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 ************************************ 00:05:31.282 END TEST thread_poller_perf 00:05:31.282 ************************************ 00:05:31.282 20:21:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:31.282 20:21:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:31.282 00:05:31.282 real 0m2.783s 00:05:31.282 user 0m2.355s 00:05:31.282 sys 0m0.210s 00:05:31.282 20:21:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.282 20:21:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 ************************************ 00:05:31.282 END TEST thread 00:05:31.282 ************************************ 00:05:31.282 20:21:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.282 20:21:52 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:31.282 20:21:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.282 20:21:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.282 20:21:52 -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 ************************************ 00:05:31.282 START TEST accel 00:05:31.282 ************************************ 00:05:31.282 20:21:52 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:31.282 * Looking for test storage... 00:05:31.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:31.282 20:21:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:31.282 20:21:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:31.282 20:21:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.282 20:21:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63578 00:05:31.282 20:21:52 accel -- accel/accel.sh@63 -- # waitforlisten 63578 00:05:31.282 20:21:52 accel -- common/autotest_common.sh@829 -- # '[' -z 63578 ']' 00:05:31.282 20:21:52 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
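The two poller_perf summaries above reduce to simple arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick sanity check of the reported numbers (all values taken from the log; the snippet is only an illustration):

awk 'BEGIN {
    tsc_hz = 2200000000                    # 2.2 GHz, as reported
    c1 = int(2208611915 / 277000)          # run with -l 1 (1 us poller period)
    c2 = int(2202273317 / 3737000)         # run with -l 0
    printf "run1: %d cyc, %d nsec\n", c1, int(c1 / tsc_hz * 1e9)
    printf "run2: %d cyc, %d nsec\n", c2, int(c2 / tsc_hz * 1e9)
}'
# Prints "run1: 7973 cyc, 3624 nsec" and "run2: 589 cyc, 267 nsec", matching the two summaries.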
00:05:31.282 20:21:52 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.282 20:21:52 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:31.282 20:21:52 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.282 20:21:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:31.282 20:21:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.282 20:21:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.282 20:21:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.282 20:21:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.282 20:21:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.282 20:21:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:31.282 20:21:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:31.282 20:21:52 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.282 20:21:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.282 [2024-07-15 20:21:52.704601] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:31.282 [2024-07-15 20:21:52.704700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63578 ] 00:05:31.540 [2024-07-15 20:21:52.841307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.540 [2024-07-15 20:21:52.901331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@862 -- # return 0 00:05:31.802 20:21:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:31.802 20:21:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:31.802 20:21:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:31.802 20:21:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:31.802 20:21:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:31.802 20:21:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.802 20:21:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 
20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:31.802 20:21:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:31.802 20:21:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:31.802 20:21:53 accel -- accel/accel.sh@75 -- # killprocess 63578 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@948 -- # '[' -z 63578 ']' 00:05:31.802 20:21:53 accel -- common/autotest_common.sh@952 -- # kill -0 63578 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@953 -- # uname 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63578 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.803 killing process with pid 63578 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63578' 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@967 -- # kill 63578 00:05:31.803 20:21:53 accel -- common/autotest_common.sh@972 -- # wait 63578 00:05:32.059 20:21:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:32.059 20:21:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.059 20:21:53 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:32.059 20:21:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
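The jq pipeline above is how get_expected_opcs builds its map: accel_get_opc_assignments returns a JSON object of opcode -> module, and with no hardware accel modules configured every opcode falls back to the software module. Run by hand it would look roughly like this (the exact opcode list depends on the SPDK build, so the output shown is only indicative):

scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# Indicative output with only the built-in software module loaded:
#   copy=software
#   fill=software
#   crc32c=software
#   compress=software
#   ...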
00:05:32.059 20:21:53 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.059 20:21:53 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.059 20:21:53 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.059 20:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.059 ************************************ 00:05:32.059 START TEST accel_missing_filename 00:05:32.059 ************************************ 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.059 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:32.059 20:21:53 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:32.059 [2024-07-15 20:21:53.512390] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:32.059 [2024-07-15 20:21:53.512497] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63629 ] 00:05:32.317 [2024-07-15 20:21:53.653171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.317 [2024-07-15 20:21:53.723573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.317 [2024-07-15 20:21:53.757447] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.317 [2024-07-15 20:21:53.799261] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:32.575 A filename is required. 
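The abort above is the expected result: the compress workload needs an input file and this test deliberately leaves out -l. For contrast, the next test points accel_perf at test/accel/bib; a standalone run along those lines (paths relative to an SPDK checkout, accel_perf assumed already built) would be:

build/examples/accel_perf -t 1 -w compress                      # fails as above: a filename is required
build/examples/accel_perf -t 1 -w compress -l test/accel/bib    # supplies the input file; per the usage text, -o 0 means "use the file size"

The compress_verify test that follows adds -y on top of this and is expected to abort instead, since compress does not support result verification.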
00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.575 00:05:32.575 real 0m0.394s 00:05:32.575 user 0m0.245s 00:05:32.575 sys 0m0.090s 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.575 ************************************ 00:05:32.575 END TEST accel_missing_filename 00:05:32.575 ************************************ 00:05:32.575 20:21:53 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:32.575 20:21:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.575 20:21:53 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.575 20:21:53 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:32.575 20:21:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.575 20:21:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.575 ************************************ 00:05:32.575 START TEST accel_compress_verify 00:05:32.575 ************************************ 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.575 20:21:53 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:32.575 20:21:53 accel.accel_compress_verify -- 
accel/accel.sh@40 -- # local IFS=, 00:05:32.575 20:21:53 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:32.575 [2024-07-15 20:21:53.954373] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:32.575 [2024-07-15 20:21:53.954460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63653 ] 00:05:32.834 [2024-07-15 20:21:54.089480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.834 [2024-07-15 20:21:54.149532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.834 [2024-07-15 20:21:54.181626] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.834 [2024-07-15 20:21:54.224088] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:32.834 00:05:32.834 Compression does not support the verify option, aborting. 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:32.834 00:05:32.834 real 0m0.369s 00:05:32.834 user 0m0.231s 00:05:32.834 sys 0m0.082s 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.834 20:21:54 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:32.834 ************************************ 00:05:32.834 END TEST accel_compress_verify 00:05:32.834 ************************************ 00:05:32.834 20:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.834 20:21:54 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:32.834 20:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:32.834 20:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.834 20:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.093 ************************************ 00:05:33.093 START TEST accel_wrong_workload 00:05:33.093 ************************************ 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.093 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:33.093 20:21:54 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:33.093 Unsupported workload type: foobar 00:05:33.093 [2024-07-15 20:21:54.363888] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:33.093 accel_perf options: 00:05:33.094 [-h help message] 00:05:33.094 [-q queue depth per core] 00:05:33.094 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.094 [-T number of threads per core 00:05:33.094 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.094 [-t time in seconds] 00:05:33.094 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.094 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:33.094 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.094 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.094 [-S for crc32c workload, use this seed value (default 0) 00:05:33.094 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.094 [-f for fill workload, use this BYTE value (default 255) 00:05:33.094 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.094 [-y verify result if this switch is on] 00:05:33.094 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.094 Can be used to spread operations across a wider range of memory. 
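The usage text above is printed because foobar is not a recognized -w value. Assembling a few of the listed flags into valid invocations (the flag meanings are exactly as listed; the particular values are arbitrary examples):

# crc32c for 1 second, queue depth 64, 4 KiB buffers, seed 32, verifying results.
build/examples/accel_perf -t 1 -w crc32c -q 64 -o 4096 -S 32 -y
# xor across 3 source buffers; -x has a minimum of 2, which is why the negative-buffers
# test further down rejects -x -1 the same way this test rejects -w foobar.
build/examples/accel_perf -t 1 -w xor -y -x 3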
00:05:33.094 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:33.094 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.094 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.094 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.094 00:05:33.094 real 0m0.030s 00:05:33.094 user 0m0.021s 00:05:33.094 sys 0m0.009s 00:05:33.094 ************************************ 00:05:33.094 END TEST accel_wrong_workload 00:05:33.094 ************************************ 00:05:33.094 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.094 20:21:54 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.094 20:21:54 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 ************************************ 00:05:33.094 START TEST accel_negative_buffers 00:05:33.094 ************************************ 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:33.094 20:21:54 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:33.094 -x option must be non-negative. 
00:05:33.094 [2024-07-15 20:21:54.441152] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:33.094 accel_perf options: 00:05:33.094 [-h help message] 00:05:33.094 [-q queue depth per core] 00:05:33.094 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:33.094 [-T number of threads per core 00:05:33.094 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:33.094 [-t time in seconds] 00:05:33.094 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:33.094 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:33.094 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:33.094 [-l for compress/decompress workloads, name of uncompressed input file 00:05:33.094 [-S for crc32c workload, use this seed value (default 0) 00:05:33.094 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:33.094 [-f for fill workload, use this BYTE value (default 255) 00:05:33.094 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:33.094 [-y verify result if this switch is on] 00:05:33.094 [-a tasks to allocate per core (default: same value as -q)] 00:05:33.094 Can be used to spread operations across a wider range of memory. 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.094 00:05:33.094 real 0m0.034s 00:05:33.094 user 0m0.020s 00:05:33.094 sys 0m0.014s 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.094 ************************************ 00:05:33.094 END TEST accel_negative_buffers 00:05:33.094 ************************************ 00:05:33.094 20:21:54 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.094 20:21:54 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.094 20:21:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.094 ************************************ 00:05:33.094 START TEST accel_crc32c 00:05:33.094 ************************************ 00:05:33.094 20:21:54 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:33.094 20:21:54 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:33.094 [2024-07-15 20:21:54.512217] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:33.095 [2024-07-15 20:21:54.512315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63712 ] 00:05:33.353 [2024-07-15 20:21:54.645776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.353 [2024-07-15 20:21:54.717615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 
-- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 20:21:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var 
val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:34.727 20:21:55 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.727 00:05:34.727 real 0m1.388s 00:05:34.727 user 0m1.215s 00:05:34.727 sys 0m0.077s 00:05:34.727 ************************************ 00:05:34.727 END TEST accel_crc32c 00:05:34.727 ************************************ 00:05:34.727 20:21:55 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.727 20:21:55 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:34.727 20:21:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.727 20:21:55 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:34.727 20:21:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:34.727 20:21:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.727 20:21:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.727 ************************************ 00:05:34.727 START TEST accel_crc32c_C2 00:05:34.727 ************************************ 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:34.727 20:21:55 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:34.727 20:21:55 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:34.727 [2024-07-15 20:21:55.956760] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:34.727 [2024-07-15 20:21:55.956929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63746 ] 00:05:34.727 [2024-07-15 20:21:56.096221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.727 [2024-07-15 20:21:56.172955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.727 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.727 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.727 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.727 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.727 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.728 20:21:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.102 20:21:57 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.102 00:05:36.102 real 0m1.404s 00:05:36.102 user 0m1.224s 00:05:36.102 sys 0m0.081s 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.102 20:21:57 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:36.102 ************************************ 00:05:36.102 END TEST accel_crc32c_C2 00:05:36.102 ************************************ 00:05:36.102 20:21:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.102 20:21:57 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:36.102 20:21:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.102 20:21:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.102 20:21:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.102 ************************************ 00:05:36.102 START TEST accel_copy 00:05:36.102 ************************************ 00:05:36.102 20:21:57 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.102 20:21:57 accel.accel_copy -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:36.102 20:21:57 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:36.102 [2024-07-15 20:21:57.403839] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:36.102 [2024-07-15 20:21:57.403943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63781 ] 00:05:36.102 [2024-07-15 20:21:57.543074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.361 [2024-07-15 20:21:57.614956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 
20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:36.361 20:21:57 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 
-- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:37.296 20:21:58 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.296 00:05:37.296 real 0m1.387s 00:05:37.296 user 0m1.222s 00:05:37.296 sys 0m0.069s 00:05:37.296 20:21:58 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.296 ************************************ 00:05:37.296 END TEST accel_copy 00:05:37.296 ************************************ 00:05:37.296 20:21:58 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:37.554 20:21:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.554 20:21:58 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:37.554 20:21:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:37.554 20:21:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.554 20:21:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.554 ************************************ 00:05:37.554 START TEST accel_fill 00:05:37.554 ************************************ 00:05:37.554 20:21:58 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.554 20:21:58 accel.accel_fill -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:37.554 20:21:58 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:37.554 [2024-07-15 20:21:58.838340] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:37.554 [2024-07-15 20:21:58.838426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63814 ] 00:05:37.554 [2024-07-15 20:21:58.974402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.554 [2024-07-15 20:21:59.043620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.812 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.813 20:21:59 accel.accel_fill -- 
accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:37.813 20:21:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
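The repeating 'IFS=:', 'read -r var val' and 'case "$var" in' entries throughout these tests are accel.sh parsing the configuration summary that accel_perf prints; the case arms capture which opcode and which module actually ran (see the accel_module= and accel_opc= assignments at accel.sh lines 22-23 in the trace). In this fill run, the -f 128 flag on the command line shows up as the 0x80 fill pattern in the dump, and the two 64 values appear to echo -q 64 -a 64 (runs without those flags dump 32 twice). A rough sketch of the parsing idiom follows; the key names matched by the case arms are placeholders not visible in this log, and dropping the harness-generated -c /dev/fd/62 JSON config is an assumption:

  while IFS=: read -r var val; do
      case "$var" in
          *Module*)          accel_module=${val# } ;;   # placeholder key name
          *'Workload Type'*) accel_opc=${val# } ;;      # placeholder key name
      esac
  done < <(/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y)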
00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:38.747 20:22:00 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.747 00:05:38.747 real 0m1.380s 00:05:38.747 user 0m1.205s 00:05:38.747 sys 0m0.082s 00:05:38.747 20:22:00 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.747 20:22:00 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:38.747 ************************************ 00:05:38.747 END TEST accel_fill 00:05:38.747 ************************************ 00:05:38.747 20:22:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:38.747 20:22:00 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:38.747 20:22:00 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:38.747 20:22:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.747 20:22:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.747 ************************************ 00:05:38.747 START TEST accel_copy_crc32c 00:05:38.747 ************************************ 00:05:38.747 20:22:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:38.747 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:38.747 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:38.747 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.747 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.747 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # 
local IFS=, 00:05:38.748 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:39.006 [2024-07-15 20:22:00.258553] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:39.006 [2024-07-15 20:22:00.258644] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63844 ] 00:05:39.006 [2024-07-15 20:22:00.394462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.006 [2024-07-15 20:22:00.454354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.006 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.007 20:22:00 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
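The copy_crc32c workload traced here chains a buffer copy with a CRC-32C calculation in one operation, which is presumably why its dump lists '4096 bytes' twice where the plain crc32c runs listed it once. The same workload can be run standalone with the command recorded in the trace; omitting the harness-supplied -c /dev/fd/62 config is again an assumption:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y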
00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.380 00:05:40.380 real 0m1.365s 00:05:40.380 user 0m1.200s 00:05:40.380 sys 0m0.073s 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.380 ************************************ 00:05:40.380 END TEST accel_copy_crc32c 00:05:40.380 20:22:01 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:40.380 ************************************ 00:05:40.380 20:22:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.380 20:22:01 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:40.380 20:22:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:40.380 20:22:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.380 20:22:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.380 ************************************ 00:05:40.380 START TEST accel_copy_crc32c_C2 00:05:40.380 ************************************ 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:40.380 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:40.380 [2024-07-15 20:22:01.685834] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:40.380 [2024-07-15 20:22:01.686024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63879 ] 00:05:40.380 [2024-07-15 20:22:01.834447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.640 [2024-07-15 20:22:01.905174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:40.640 20:22:01 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.577 00:05:41.577 real 0m1.416s 00:05:41.577 user 0m1.239s 00:05:41.577 sys 0m0.086s 00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 
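Each test ends the same way: bash's time summary (real/user/sys), xtrace_disable, and three [[ ]] checks. Since bash -x expands variables and backslash-escapes literal pattern words, the traced '[[ -n software ]]', '[[ -n copy_crc32c ]]' and '[[ software == \s\o\f\t\w\a\r\e ]]' are presumably written in accel.sh with variables, roughly:

  [[ -n $accel_module ]]            # a module was reported by accel_perf
  [[ -n $accel_opc ]]               # the expected opcode was parsed from its output
  [[ $accel_module == software ]]   # the software fallback handled the run

The variable names are inferred from the accel_module=/accel_opc= assignments earlier in the trace, not taken from the script source. Note also that this -C 2 variant dumps '8192 bytes' as its second buffer size where the plain copy_crc32c run dumped 4096 twice.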
00:05:41.577 20:22:03 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:41.577 ************************************ 00:05:41.577 END TEST accel_copy_crc32c_C2 00:05:41.577 ************************************ 00:05:41.835 20:22:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:41.835 20:22:03 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:41.835 20:22:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.835 20:22:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.835 20:22:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.835 ************************************ 00:05:41.835 START TEST accel_dualcast 00:05:41.835 ************************************ 00:05:41.835 20:22:03 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.835 20:22:03 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.836 20:22:03 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.836 20:22:03 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.836 20:22:03 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.836 20:22:03 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:41.836 20:22:03 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:41.836 [2024-07-15 20:22:03.132724] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
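The dualcast test starting here exercises SPDK's dualcast opcode, a copy of one source buffer into two destinations. The DPDK EAL parameters line that follows records how this accel_perf instance was brought up: one core (-c 0x1, matching the 'Total cores available: 1' notice), no telemetry, --huge-unlink, physical-address IOVA mode, and a per-run --file-prefix=spdk_pid63913 that keeps its hugepage/runtime files separate from the other tests in this job. A standalone repro of the same workload, again assuming the -c /dev/fd/62 config can be dropped:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y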
00:05:41.836 [2024-07-15 20:22:03.132819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63913 ] 00:05:41.836 [2024-07-15 20:22:03.272463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.836 [2024-07-15 20:22:03.335287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:42.094 20:22:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:43.038 20:22:04 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.038 00:05:43.038 real 0m1.386s 00:05:43.038 user 0m1.215s 00:05:43.038 sys 0m0.079s 00:05:43.038 20:22:04 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.038 20:22:04 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:43.038 ************************************ 00:05:43.038 END TEST accel_dualcast 00:05:43.038 ************************************ 00:05:43.038 20:22:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.038 20:22:04 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:43.296 20:22:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:43.296 20:22:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.296 20:22:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.296 ************************************ 00:05:43.296 START TEST accel_compare 00:05:43.296 ************************************ 00:05:43.296 20:22:04 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:43.296 20:22:04 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:43.296 [2024-07-15 20:22:04.571185] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
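The accel_compare pass starting above is launched through run_test, which ends up executing the accel_perf example binary recorded in the trace (/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y). A minimal way to repeat just this pass by hand is sketched below; it assumes the SPDK tree is built at the same path and that hugepages are already configured (neither step is shown in this excerpt), and it drops -c /dev/fd/62, which here appears to carry only the empty accel JSON config assembled by build_accel_config.

    cd /home/vagrant/spdk_repo/spdk
    # flags copied from the trace: -t 1 runs for one second, -w compare selects the workload,
    # -y is passed by accel_test for this group of tests
    ./build/examples/accel_perf -t 1 -w compare -y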
00:05:43.296 [2024-07-15 20:22:04.571271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63950 ] 00:05:43.296 [2024-07-15 20:22:04.712314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.296 [2024-07-15 20:22:04.785812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:43.555 20:22:04 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:44.489 20:22:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.489 00:05:44.489 real 0m1.398s 00:05:44.489 user 0m1.226s 00:05:44.489 sys 0m0.079s 00:05:44.489 20:22:05 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.489 20:22:05 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:44.489 ************************************ 00:05:44.489 END TEST accel_compare 00:05:44.489 ************************************ 00:05:44.489 20:22:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:44.489 20:22:05 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:44.489 20:22:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:44.489 20:22:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.489 20:22:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.748 ************************************ 00:05:44.748 START TEST accel_xor 00:05:44.748 ************************************ 00:05:44.748 20:22:05 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:44.748 20:22:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:44.748 [2024-07-15 20:22:06.011654] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
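accel_xor, starting above, follows the same pattern with -w xor; the val=2 and '4096 bytes' entries in the configuration dump below are consistent with two source buffers over 4 KiB transfers for the default xor case (the -x 3 variant that follows records val=3 instead). Under the same build-tree and hugepage assumptions as the compare sketch, a standalone equivalent would be:

    # 1-second software xor run, flags copied from the trace
    ./build/examples/accel_perf -t 1 -w xor -y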
00:05:44.748 [2024-07-15 20:22:06.011748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63979 ] 00:05:44.748 [2024-07-15 20:22:06.152635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.748 [2024-07-15 20:22:06.230469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.005 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.006 20:22:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.937 20:22:07 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:45.937 20:22:07 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.937 00:05:45.937 real 0m1.399s 00:05:45.937 user 0m1.214s 00:05:45.937 sys 0m0.091s 00:05:45.937 20:22:07 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.937 ************************************ 00:05:45.937 END TEST accel_xor 00:05:45.937 ************************************ 00:05:45.937 20:22:07 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:45.937 20:22:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.937 20:22:07 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:45.937 20:22:07 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:45.937 20:22:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.937 20:22:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.195 ************************************ 00:05:46.195 START TEST accel_xor 00:05:46.195 ************************************ 00:05:46.195 20:22:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:46.195 [2024-07-15 20:22:07.465566] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
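The second accel_xor pass starting above is the same workload with -x 3 appended; the val=3 recorded below (where the previous run showed val=2) is consistent with -x selecting the number of xor source buffers. Reproducing it standalone, under the same assumptions as the earlier sketches, only needs that extra flag:

    # three-source xor, flags copied from the trace
    ./build/examples/accel_perf -t 1 -w xor -y -x 3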
00:05:46.195 [2024-07-15 20:22:07.465668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64019 ] 00:05:46.195 [2024-07-15 20:22:07.602725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.195 [2024-07-15 20:22:07.657605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.195 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:46.453 20:22:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.388 20:22:08 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:47.388 20:22:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.388 00:05:47.388 real 0m1.362s 00:05:47.388 user 0m1.191s 00:05:47.388 sys 0m0.077s 00:05:47.388 20:22:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.388 ************************************ 00:05:47.388 END TEST accel_xor 00:05:47.388 ************************************ 00:05:47.388 20:22:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:47.388 20:22:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.388 20:22:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:47.388 20:22:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:47.388 20:22:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.388 20:22:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.388 ************************************ 00:05:47.388 START TEST accel_dif_verify 00:05:47.388 ************************************ 00:05:47.388 20:22:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:47.388 20:22:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:47.388 [2024-07-15 20:22:08.879118] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
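accel_dif_verify, starting above, switches to -w dif_verify and, unlike the earlier tests, is invoked without -y. The '4096 bytes', '512 bytes' and '8 bytes' values dumped below match the usual T10 DIF layout of 8 bytes of protection information per 512-byte block over a 4 KiB transfer. With the same build-tree and hugepage assumptions as before:

    # 1-second software dif_verify run, flags copied from the trace
    ./build/examples/accel_perf -t 1 -w dif_verify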
00:05:47.388 [2024-07-15 20:22:08.879205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64048 ] 00:05:47.645 [2024-07-15 20:22:09.016822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.645 [2024-07-15 20:22:09.070548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:47.645 20:22:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.022 20:22:10 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 ************************************ 00:05:49.022 END TEST accel_dif_verify 00:05:49.022 ************************************ 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:49.022 20:22:10 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.022 00:05:49.022 real 0m1.361s 00:05:49.022 user 0m1.197s 00:05:49.022 sys 0m0.074s 00:05:49.022 20:22:10 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.022 20:22:10 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:49.022 20:22:10 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.022 20:22:10 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:49.022 20:22:10 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:49.022 20:22:10 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.022 20:22:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.022 ************************************ 00:05:49.022 START TEST accel_dif_generate 00:05:49.022 ************************************ 00:05:49.022 20:22:10 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:49.022 20:22:10 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:49.022 20:22:10 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:49.022 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.022 20:22:10 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:49.023 20:22:10 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:49.023 [2024-07-15 20:22:10.289100] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:49.023 [2024-07-15 20:22:10.289187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64077 ] 00:05:49.023 [2024-07-15 20:22:10.427519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.023 [2024-07-15 20:22:10.490224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.281 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.282 20:22:10 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:49.282 20:22:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:50.218 ************************************ 00:05:50.218 END TEST accel_dif_generate 00:05:50.218 ************************************ 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.218 20:22:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:50.218 
20:22:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.218 00:05:50.218 real 0m1.372s 00:05:50.218 user 0m1.208s 00:05:50.218 sys 0m0.073s 00:05:50.218 20:22:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.218 20:22:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:50.218 20:22:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.218 20:22:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:50.218 20:22:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:50.218 20:22:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.218 20:22:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.218 ************************************ 00:05:50.218 START TEST accel_dif_generate_copy 00:05:50.218 ************************************ 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:50.218 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:50.218 [2024-07-15 20:22:11.710203] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
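The long runs of "case "$var" in", "IFS=:", "read -r var val" and "val=..." entries in this part of the log are bash xtrace output from accel.sh's result-parsing loop: each "name: value" line that accel_perf reports for a run is split on ":", and the module and opcode are captured so the accel.sh@27 checks at the end of every test ([[ -n software ]], [[ -n dif_generate ]], [[ software == software ]]) can assert that the software module executed the expected opcode. The sketch below shows that parsing pattern in isolation; the case patterns and the accel_perf_output variable are illustrative assumptions, not the actual accel.sh source.

  # Hypothetical stand-in for the parsing loop traced above (accel.sh@19-23).
  # Splits each reported "name: value" line on ':' and keeps the two fields
  # the harness later asserts on.
  accel_module= accel_opc=
  while IFS=: read -r var val; do
      case "$var" in
          *Module*)   accel_module=${val//[[:space:]]/} ;;  # e.g. "software"
          *Workload*) accel_opc=${val//[[:space:]]/}    ;;  # e.g. "dif_generate_copy"
      esac
  done <<< "$accel_perf_output"   # assumed variable holding accel_perf's printed report
  [[ -n $accel_module && -n $accel_opc ]]   # mirrors the accel.sh@27 checks seen above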
00:05:50.218 [2024-07-15 20:22:11.710317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64118 ] 00:05:50.477 [2024-07-15 20:22:11.849688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.477 [2024-07-15 20:22:11.905675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:50.477 20:22:11 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
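For reference, the dif_generate_copy case being configured here reduces to the accel_perf command recorded at accel.sh@12 above. Run by hand it would look roughly like the sketch below; the generated JSON config that the harness feeds in over -c /dev/fd/62 is omitted, and this log does not show whether the run behaves identically without it, so treat this as a rough reproduction rather than the exact test command.

  # Approximate standalone rerun of the traced dif_generate_copy case
  # (paths as they appear in the log; the build_accel_config -c input is omitted).
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w dif_generate_copy   # 1-second run; the trace shows it lands on the software module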
00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.853 00:05:51.853 real 0m1.365s 00:05:51.853 user 0m1.194s 00:05:51.853 sys 0m0.076s 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.853 20:22:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:51.853 ************************************ 00:05:51.853 END TEST accel_dif_generate_copy 00:05:51.853 ************************************ 00:05:51.853 20:22:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:51.853 20:22:13 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:51.853 20:22:13 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.853 20:22:13 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:51.853 20:22:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.853 20:22:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.853 ************************************ 00:05:51.853 START TEST accel_comp 00:05:51.853 ************************************ 00:05:51.853 20:22:13 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:51.853 20:22:13 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:51.853 20:22:13 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:51.853 [2024-07-15 20:22:13.124938] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:51.853 [2024-07-15 20:22:13.125109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64148 ] 00:05:51.853 [2024-07-15 20:22:13.263799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.853 [2024-07-15 20:22:13.319681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:52.112 20:22:13 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:53.050 20:22:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.050 00:05:53.050 real 0m1.372s 00:05:53.050 user 0m1.203s 00:05:53.050 sys 0m0.076s 00:05:53.050 20:22:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.050 20:22:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:53.050 ************************************ 00:05:53.050 END TEST accel_comp 00:05:53.050 ************************************ 00:05:53.050 20:22:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.050 20:22:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.050 20:22:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.050 20:22:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.050 20:22:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.050 ************************************ 00:05:53.050 START TEST accel_decomp 00:05:53.050 ************************************ 00:05:53.050 20:22:14 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:53.050 20:22:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:53.050 [2024-07-15 20:22:14.540617] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:53.050 [2024-07-15 20:22:14.540728] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64177 ] 00:05:53.309 [2024-07-15 20:22:14.680158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.309 [2024-07-15 20:22:14.738791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
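The decompress cases differ from the ones above in that accel_perf is pointed at the pre-built test/accel/bib payload: the command captured at accel.sh@12 for accel_decomp is "-t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y", and the Yes value read back just above is presumably the verify setting that -y enables. A hand-run equivalent, again without the generated -c config (an assumption, as noted earlier), would be roughly:

  # Flags copied verbatim from the traced accel_decomp command line;
  # only the -c /dev/fd/62 config is dropped.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y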
00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:53.309 20:22:14 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:54.685 20:22:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.685 00:05:54.685 real 0m1.371s 00:05:54.685 user 0m1.203s 00:05:54.685 sys 0m0.076s 00:05:54.686 20:22:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.686 20:22:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:54.686 ************************************ 00:05:54.686 END TEST accel_decomp 00:05:54.686 ************************************ 00:05:54.686 20:22:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:54.686 20:22:15 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:54.686 20:22:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:54.686 20:22:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.686 20:22:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.686 ************************************ 00:05:54.686 START TEST accel_decomp_full 00:05:54.686 ************************************ 00:05:54.686 20:22:15 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:54.686 20:22:15 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:54.686 [2024-07-15 20:22:15.958201] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
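accel_decomp_full repeats the decompress run with -o 0 appended; further down the trace the size recorded for this run is '111250 bytes' instead of the '4096 bytes' seen elsewhere, which suggests the whole bib payload is handled as a single transfer, and the *_mcore variants that follow add -m 0xf, matching the four reactors reported in their EAL output. Assuming a host with hugepages and at least four cores already set up for SPDK, the variants traced in this stretch of the log could be compared by hand with something like the loop below; the argument strings are copied from the traced commands and the generated -c config is again omitted.

  # Illustrative timing loop over the decompress variants traced in this log.
  cd /home/vagrant/spdk_repo/spdk
  for args in \
      "-t 1 -w decompress -l test/accel/bib -y" \
      "-t 1 -w decompress -l test/accel/bib -y -o 0" \
      "-t 1 -w decompress -l test/accel/bib -y -m 0xf" \
      "-t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf"; do
      echo "== accel_perf $args =="
      time ./build/examples/accel_perf $args   # $args left unquoted on purpose so the flag string splits
  done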
00:05:54.686 [2024-07-15 20:22:15.958326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64217 ] 00:05:54.686 [2024-07-15 20:22:16.098322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.686 [2024-07-15 20:22:16.156628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:54.944 20:22:16 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:55.901 ************************************ 00:05:55.901 END TEST accel_decomp_full 00:05:55.901 ************************************ 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:55.901 20:22:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.901 00:05:55.901 real 0m1.387s 00:05:55.901 user 0m1.215s 00:05:55.901 sys 0m0.078s 00:05:55.901 20:22:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.901 20:22:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:55.901 20:22:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.901 20:22:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:55.901 20:22:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:55.901 20:22:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.901 20:22:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.901 ************************************ 00:05:55.901 START TEST accel_decomp_mcore 00:05:55.901 ************************************ 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:55.901 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:55.901 [2024-07-15 20:22:17.389458] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:55.901 [2024-07-15 20:22:17.389586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64246 ] 00:05:56.160 [2024-07-15 20:22:17.527548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.160 [2024-07-15 20:22:17.589975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.160 [2024-07-15 20:22:17.590124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.160 [2024-07-15 20:22:17.590189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.160 [2024-07-15 20:22:17.590419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.160 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:56.161 20:22:17 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.537 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 20:22:18 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.538 00:05:57.538 real 0m1.397s 00:05:57.538 user 0m4.428s 00:05:57.538 sys 0m0.096s 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.538 20:22:18 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:57.538 ************************************ 00:05:57.538 END TEST accel_decomp_mcore 00:05:57.538 ************************************ 00:05:57.538 20:22:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.538 20:22:18 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.538 20:22:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:57.538 20:22:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.538 20:22:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.538 ************************************ 00:05:57.538 START TEST accel_decomp_full_mcore 00:05:57.538 ************************************ 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.538 20:22:18 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:57.538 20:22:18 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:57.538 [2024-07-15 20:22:18.831892] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:05:57.538 [2024-07-15 20:22:18.832004] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64288 ] 00:05:57.538 [2024-07-15 20:22:18.967820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.538 [2024-07-15 20:22:19.031814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.538 [2024-07-15 20:22:19.031930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.538 [2024-07-15 20:22:19.031985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.538 [2024-07-15 20:22:19.031993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:57.797 20:22:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:57.797 20:22:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.733 00:05:58.733 real 0m1.414s 00:05:58.733 user 0m4.554s 00:05:58.733 sys 0m0.092s 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.733 20:22:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:58.733 ************************************ 00:05:58.733 END TEST accel_decomp_full_mcore 00:05:58.733 ************************************ 00:05:58.993 20:22:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.993 20:22:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:58.993 20:22:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:58.993 20:22:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.993 20:22:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.993 ************************************ 00:05:58.993 START TEST accel_decomp_mthread 00:05:58.993 ************************************ 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:58.993 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:58.993 [2024-07-15 20:22:20.295049] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
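The repeated 'accel/accel.sh@19 -- # IFS=:', 'read -r var val' and 'case "$var"' entries that dominate the trace above (and the remaining decompress runs below) are bash xtrace of accel.sh comparing each run's settings against an expected key:value list. The script source itself is not reproduced in this log; the loop is roughly of the following shape, where the key names and the printf-fed sample input are illustrative only, not the literal SPDK code:

  # Illustrative sketch of the accel.sh check loop seen in the xtrace above; key names and
  # sample input are assumptions, not the actual SPDK source.
  printf '%s\n' 'opc:decompress' 'module:software' |
  while IFS=: read -r var val; do
      case "$var" in
          opc) accel_opc=$val ;;       # cf. "accel/accel.sh@23 -- # accel_opc=decompress" above
          module) accel_module=$val ;; # cf. "accel/accel.sh@22 -- # accel_module=software" above
      esac
      echo "checked $var=$val"
  done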
00:05:58.993 [2024-07-15 20:22:20.295161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64321 ] 00:05:58.993 [2024-07-15 20:22:20.433267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.993 [2024-07-15 20:22:20.488050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.251 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:59.252 20:22:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 ************************************ 00:06:00.188 END TEST accel_decomp_mthread 00:06:00.188 ************************************ 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.188 00:06:00.188 real 0m1.368s 00:06:00.188 user 0m1.198s 00:06:00.188 sys 0m0.078s 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.188 20:22:21 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:00.188 20:22:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.188 20:22:21 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.188 20:22:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:00.188 20:22:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.188 20:22:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.447 ************************************ 00:06:00.447 START 
TEST accel_decomp_full_mthread 00:06:00.447 ************************************ 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.447 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:00.448 [2024-07-15 20:22:21.713983] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
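For reference, the accel_perf command lines behind the two 'full' decompress variants are captured verbatim by the accel.sh@12 trace lines above. They differ only in the parallelism flag: -m 0xf spreads the work over four reactor cores (hence "Total cores available: 4" and four reactor lines earlier), while -T 2 keeps a single core with what appears to be a two-thread setting (val=2 in the trace below). Both validate against a '111250 bytes' data size rather than the '4096 bytes' of the plain runs:

  # Command lines as traced above (accel.sh@12); /dev/fd/62 carries the JSON accel config
  # that build_accel_config pipes in (see the accel_json_cfg / jq -r . lines above), so these
  # are meaningful inside the test harness rather than as stand-alone shell commands.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf   # accel_decomp_full_mcore
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2     # accel_decomp_full_mthread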
00:06:00.448 [2024-07-15 20:22:21.714067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64356 ] 00:06:00.448 [2024-07-15 20:22:21.850981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.448 [2024-07-15 20:22:21.907494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.448 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:00.707 20:22:21 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:00.707 20:22:21 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.641 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.641 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.641 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.641 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.641 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.642 00:06:01.642 real 0m1.408s 00:06:01.642 user 0m1.237s 00:06:01.642 sys 0m0.078s 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.642 ************************************ 00:06:01.642 END TEST accel_decomp_full_mthread 00:06:01.642 ************************************ 00:06:01.642 20:22:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
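All four decompress variants above finish in a similar wall-clock time (real 1.368-1.414 s), while the CPU accounting differs as expected: the 0xf-core runs report about 4.4-4.6 s of user time spread over four reactors, versus roughly 1.2 s for the single-core -T 2 runs. Those figures can be pulled back out of a saved copy of this console output with something like the following (the file name is only a placeholder):

  # 'console.log' stands in for a local copy of this build output.
  grep -E '(real|user)[[:space:]]+0m[0-9]+\.[0-9]+s' console.log
  # Per the entries above: mcore 1.397s/4.428s, full_mcore 1.414s/4.554s,
  # mthread 1.368s/1.198s, full_mthread 1.408s/1.237s (real/user).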
00:06:01.642 20:22:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:01.901 20:22:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:01.901 20:22:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:01.901 20:22:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:01.901 20:22:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.901 20:22:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.901 20:22:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.901 20:22:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.901 20:22:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.901 20:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.901 20:22:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.901 20:22:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.901 20:22:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:01.901 20:22:23 accel -- accel/accel.sh@41 -- # jq -r . 00:06:01.901 ************************************ 00:06:01.901 START TEST accel_dif_functional_tests 00:06:01.901 ************************************ 00:06:01.901 20:22:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:01.901 [2024-07-15 20:22:23.205751] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:01.901 [2024-07-15 20:22:23.205856] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64392 ] 00:06:01.901 [2024-07-15 20:22:23.342705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.159 [2024-07-15 20:22:23.403442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.159 [2024-07-15 20:22:23.403609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.159 [2024-07-15 20:22:23.403614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.159 00:06:02.159 00:06:02.159 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.159 http://cunit.sourceforge.net/ 00:06:02.159 00:06:02.159 00:06:02.159 Suite: accel_dif 00:06:02.159 Test: verify: DIF generated, GUARD check ...passed 00:06:02.159 Test: verify: DIF generated, APPTAG check ...passed 00:06:02.159 Test: verify: DIF generated, REFTAG check ...passed 00:06:02.159 Test: verify: DIF not generated, GUARD check ...passed 00:06:02.159 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:22:23.454076] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:02.159 passed 00:06:02.159 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:22:23.454157] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:02.159 [2024-07-15 20:22:23.454279] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:02.159 passed 00:06:02.159 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:02.159 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:06:02.159 Test: verify: APPTAG incorrect, no APPTAG check ...[2024-07-15 20:22:23.454417] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, 
Actual=14 00:06:02.159 passed 00:06:02.159 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:02.159 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:02.159 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:22:23.454743] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:02.159 passed 00:06:02.159 Test: verify copy: DIF generated, GUARD check ...passed 00:06:02.159 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:02.159 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:02.159 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:02.159 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:22:23.455217] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:02.159 passed 00:06:02.159 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:22:23.455293] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:02.159 [2024-07-15 20:22:23.455383] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:02.159 passed 00:06:02.159 Test: generate copy: DIF generated, GUARD check ...passed 00:06:02.159 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:02.159 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:02.159 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:02.159 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:02.159 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:02.159 Test: generate copy: iovecs-len validate ...[2024-07-15 20:22:23.455741] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:02.159 passed 00:06:02.159 Test: generate copy: buffer alignment validate ...passed 00:06:02.159 00:06:02.159 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.159 suites 1 1 n/a 0 0 00:06:02.159 tests 26 26 26 0 0 00:06:02.159 asserts 115 115 115 0 n/a 00:06:02.159 00:06:02.159 Elapsed time = 0.005 seconds 00:06:02.159 00:06:02.159 real 0m0.475s 00:06:02.159 user 0m0.538s 00:06:02.159 sys 0m0.101s 00:06:02.159 20:22:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.159 ************************************ 00:06:02.159 END TEST accel_dif_functional_tests 00:06:02.159 ************************************ 00:06:02.159 20:22:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:02.419 20:22:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.419 00:06:02.419 real 0m31.101s 00:06:02.419 user 0m33.254s 00:06:02.419 sys 0m2.903s 00:06:02.420 ************************************ 00:06:02.420 END TEST accel 00:06:02.420 ************************************ 00:06:02.420 20:22:23 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.420 20:22:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.420 20:22:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.420 20:22:23 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:02.420 20:22:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.420 20:22:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.420 20:22:23 -- common/autotest_common.sh@10 -- # set +x 00:06:02.420 ************************************ 00:06:02.420 START TEST accel_rpc 00:06:02.420 ************************************ 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:02.420 * Looking for test storage... 00:06:02.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:02.420 20:22:23 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.420 20:22:23 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64457 00:06:02.420 20:22:23 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64457 00:06:02.420 20:22:23 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 64457 ']' 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.420 20:22:23 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.420 [2024-07-15 20:22:23.862561] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
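Despite the *ERROR* lines from dif.c, the accel_dif CUnit suite above is healthy: each error is emitted by a deliberately negative test case ('DIF not generated', 'APPTAG incorrect', 'iovecs-len validate', and so on) and is immediately followed by 'passed', with the run summary reporting 26/26 tests and 115/115 asserts passing in about 0.005 seconds. The binary driving the suite is the one traced above; a by-hand invocation would look like the sketch below, keeping in mind that /dev/fd/62 only exists when the harness feeds in its generated accel JSON config:

  # As traced above via autotest_common.sh@1123; a stand-alone run would substitute its own
  # JSON config file for the -c argument supplied by the harness.
  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62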
00:06:02.420 [2024-07-15 20:22:23.862665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64457 ] 00:06:02.679 [2024-07-15 20:22:23.994495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.679 [2024-07-15 20:22:24.053439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.614 20:22:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.614 20:22:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:03.614 20:22:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:03.614 20:22:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:03.614 20:22:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:03.614 20:22:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:03.614 20:22:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:03.614 20:22:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.614 20:22:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.614 20:22:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.614 ************************************ 00:06:03.614 START TEST accel_assign_opcode 00:06:03.614 ************************************ 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.614 [2024-07-15 20:22:24.874110] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.614 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.615 [2024-07-15 20:22:24.882101] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:03.615 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.615 20:22:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:03.615 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.615 20:22:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.615 software 00:06:03.615 ************************************ 00:06:03.615 END TEST accel_assign_opcode 00:06:03.615 ************************************ 00:06:03.615 00:06:03.615 real 0m0.206s 00:06:03.615 user 0m0.056s 00:06:03.615 sys 0m0.010s 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.615 20:22:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:03.615 20:22:25 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:03.615 20:22:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64457 00:06:03.615 20:22:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 64457 ']' 00:06:03.615 20:22:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 64457 00:06:03.615 20:22:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64457 00:06:03.872 killing process with pid 64457 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64457' 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 64457 00:06:03.872 20:22:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 64457 00:06:04.131 00:06:04.131 real 0m1.680s 00:06:04.131 user 0m1.922s 00:06:04.131 sys 0m0.332s 00:06:04.131 20:22:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.131 20:22:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.131 ************************************ 00:06:04.131 END TEST accel_rpc 00:06:04.131 ************************************ 00:06:04.131 20:22:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:04.131 20:22:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.131 20:22:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.131 20:22:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.131 20:22:25 -- common/autotest_common.sh@10 -- # set +x 00:06:04.131 ************************************ 00:06:04.131 START TEST app_cmdline 00:06:04.131 ************************************ 00:06:04.131 20:22:25 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:04.131 * Looking for test storage... 00:06:04.131 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.131 20:22:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:04.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
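The accel_rpc/accel_assign_opcode sequence that just completed exercises opcode-to-module assignment over JSON-RPC: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned first to a bogus 'incorrect' module and then to 'software', framework_start_init brings the subsystems up, and accel_get_opc_assignments confirms that copy landed on the software module. Outside the rpc_cmd helper used in the trace, the same flow could be driven directly with scripts/rpc.py, roughly as follows (paths as used on this CI VM; rpc_cmd issues the same RPC methods):

  # Sketch of the accel_rpc flow using the in-repo rpc.py client.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # (the harness waits for the RPC socket via waitforlisten before issuing calls)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy
  # Expected output, per the test above: software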
00:06:04.131 20:22:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64563 00:06:04.131 20:22:25 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:04.131 20:22:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64563 00:06:04.131 20:22:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 64563 ']' 00:06:04.131 20:22:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.131 20:22:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.131 20:22:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.132 20:22:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.132 20:22:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.132 [2024-07-15 20:22:25.599487] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:04.132 [2024-07-15 20:22:25.599961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64563 ] 00:06:04.390 [2024-07-15 20:22:25.746699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.390 [2024-07-15 20:22:25.807042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.329 20:22:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.329 20:22:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:05.329 { 00:06:05.329 "fields": { 00:06:05.329 "commit": "f8598a71f", 00:06:05.329 "major": 24, 00:06:05.329 "minor": 9, 00:06:05.329 "patch": 0, 00:06:05.329 "suffix": "-pre" 00:06:05.329 }, 00:06:05.329 "version": "SPDK v24.09-pre git sha1 f8598a71f" 00:06:05.329 } 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:05.329 20:22:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:05.329 20:22:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.329 20:22:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.589 20:22:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:05.589 20:22:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:05.589 20:22:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:05.589 20:22:26 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.847 2024/07/15 20:22:27 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:05.847 request: 00:06:05.847 { 00:06:05.847 "method": "env_dpdk_get_mem_stats", 00:06:05.847 "params": {} 00:06:05.847 } 00:06:05.847 Got JSON-RPC error response 00:06:05.847 GoRPCClient: error on JSON-RPC call 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.847 20:22:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64563 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 64563 ']' 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 64563 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64563 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.847 killing process with pid 64563 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64563' 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@967 -- # kill 64563 00:06:05.847 20:22:27 app_cmdline -- common/autotest_common.sh@972 -- # wait 64563 00:06:06.106 00:06:06.106 real 0m1.991s 00:06:06.106 user 0m2.641s 00:06:06.106 sys 0m0.382s 00:06:06.106 20:22:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.106 20:22:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.106 ************************************ 00:06:06.106 END TEST app_cmdline 00:06:06.106 ************************************ 00:06:06.106 20:22:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.106 20:22:27 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.106 20:22:27 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:06:06.106 20:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.106 20:22:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.106 ************************************ 00:06:06.106 START TEST version 00:06:06.106 ************************************ 00:06:06.106 20:22:27 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.106 * Looking for test storage... 00:06:06.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.106 20:22:27 version -- app/version.sh@17 -- # get_header_version major 00:06:06.106 20:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # cut -f2 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.106 20:22:27 version -- app/version.sh@17 -- # major=24 00:06:06.106 20:22:27 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.106 20:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # cut -f2 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.106 20:22:27 version -- app/version.sh@18 -- # minor=9 00:06:06.106 20:22:27 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.106 20:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # cut -f2 00:06:06.106 20:22:27 version -- app/version.sh@19 -- # patch=0 00:06:06.106 20:22:27 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.106 20:22:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # cut -f2 00:06:06.106 20:22:27 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.106 20:22:27 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.106 20:22:27 version -- app/version.sh@22 -- # version=24.9 00:06:06.106 20:22:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.106 20:22:27 version -- app/version.sh@28 -- # version=24.9rc0 00:06:06.106 20:22:27 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:06.106 20:22:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.365 20:22:27 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:06.365 20:22:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:06.365 00:06:06.365 real 0m0.142s 00:06:06.365 user 0m0.079s 00:06:06.365 sys 0m0.092s 00:06:06.365 20:22:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.365 20:22:27 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.365 ************************************ 00:06:06.365 END TEST version 00:06:06.365 ************************************ 00:06:06.365 20:22:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:06.365 20:22:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:06.365 
20:22:27 -- spdk/autotest.sh@198 -- # uname -s 00:06:06.365 20:22:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:06.365 20:22:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:06.365 20:22:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:06.365 20:22:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:06.365 20:22:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:06.365 20:22:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.365 20:22:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:06.365 20:22:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:06.365 20:22:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.365 20:22:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.365 20:22:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.365 20:22:27 -- common/autotest_common.sh@10 -- # set +x 00:06:06.365 ************************************ 00:06:06.365 START TEST nvmf_tcp 00:06:06.365 ************************************ 00:06:06.365 20:22:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:06.365 * Looking for test storage... 00:06:06.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.365 20:22:27 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.365 20:22:27 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.365 20:22:27 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.365 20:22:27 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.365 20:22:27 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.365 20:22:27 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.365 20:22:27 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:06.365 20:22:27 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:06.365 20:22:27 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.365 20:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:06.365 20:22:27 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:06.365 20:22:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.365 20:22:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.365 20:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.365 ************************************ 00:06:06.365 START TEST nvmf_example 00:06:06.365 ************************************ 00:06:06.365 20:22:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:06.624 * Looking for test storage... 
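Every START TEST / END TEST banner in this log, including the nvmf_example one just above, is printed by the run_test wrapper, which also accounts for the '[' 3 -le 1 ']' argument check, the xtrace_disable calls around the banners, and the real/user/sys timing printed when each test finishes. A rough sketch of that wrapper, inferred from the traced lines (the actual autotest_common.sh implementation differs in details such as its xtrace handling):

    run_test() {
        if [ "$#" -le 1 ]; then
            return 1                        # needs a test name plus a command to run
        fi
        local test_name=$1; shift
        set +x                              # the real helper uses xtrace_disable here
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        set -x
        time "$@"                           # source of the real/user/sys summary lines
        set +x
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        set -x
    }

It is invoked here as run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp, exactly as nvmf.sh shows above.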
00:06:06.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.624 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:06.625 20:22:27 
nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:06.625 Cannot find device "nvmf_init_br" 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:06.625 Cannot find device "nvmf_tgt_br" 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:06.625 Cannot find device "nvmf_tgt_br2" 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:06.625 Cannot find device "nvmf_init_br" 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:06.625 Cannot find device "nvmf_tgt_br" 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:06.625 20:22:27 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:06.625 Cannot find device 
"nvmf_tgt_br2" 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:06.625 Cannot find device "nvmf_br" 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:06.625 Cannot find device "nvmf_init_if" 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:06.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:06.625 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:06.625 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:06.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:06.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:06:06.884 00:06:06.884 --- 10.0.0.2 ping statistics --- 00:06:06.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.884 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:06.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:06.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:06:06.884 00:06:06.884 --- 10.0.0.3 ping statistics --- 00:06:06.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.884 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:06.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:06.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:06:06.884 00:06:06.884 --- 10.0.0.1 ping statistics --- 00:06:06.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:06.884 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64908 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 
64908 00:06:06.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 64908 ']' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.884 20:22:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:08.257 20:22:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:18.246 Initializing NVMe Controllers 00:06:18.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:18.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:18.246 Initialization complete. Launching workers. 00:06:18.246 ======================================================== 00:06:18.246 Latency(us) 00:06:18.246 Device Information : IOPS MiB/s Average min max 00:06:18.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14694.44 57.40 4354.96 755.64 26087.85 00:06:18.246 ======================================================== 00:06:18.246 Total : 14694.44 57.40 4354.96 755.64 26087.85 00:06:18.246 00:06:18.246 20:22:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:18.246 20:22:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:18.246 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:18.246 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:18.504 rmmod nvme_tcp 00:06:18.504 rmmod nvme_fabrics 00:06:18.504 rmmod nvme_keyring 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64908 ']' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64908 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 64908 ']' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 64908 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64908 00:06:18.504 killing process with pid 64908 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64908' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 64908 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 64908 00:06:18.504 nvmf threads initialize successfully 00:06:18.504 bdev subsystem init successfully 
00:06:18.504 created a nvmf target service 00:06:18.504 create targets's poll groups done 00:06:18.504 all subsystems of target started 00:06:18.504 nvmf target is running 00:06:18.504 all subsystems of target stopped 00:06:18.504 destroy targets's poll groups done 00:06:18.504 destroyed the nvmf target service 00:06:18.504 bdev subsystem finish successfully 00:06:18.504 nvmf threads destroy successfully 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:18.504 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:18.505 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:18.505 20:22:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.505 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:18.505 20:22:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.764 20:22:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:18.764 20:22:40 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:18.764 20:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:18.764 20:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:18.764 00:06:18.764 real 0m12.247s 00:06:18.764 user 0m44.304s 00:06:18.764 sys 0m1.885s 00:06:18.764 20:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.764 20:22:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:18.764 ************************************ 00:06:18.764 END TEST nvmf_example 00:06:18.764 ************************************ 00:06:18.764 20:22:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:18.764 20:22:40 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:18.764 20:22:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:18.764 20:22:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.764 20:22:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:18.764 ************************************ 00:06:18.764 START TEST nvmf_filesystem 00:06:18.764 ************************************ 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:18.764 * Looking for test storage... 
00:06:18.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:18.764 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:18.765 #define SPDK_CONFIG_H 00:06:18.765 #define SPDK_CONFIG_APPS 1 00:06:18.765 #define SPDK_CONFIG_ARCH native 00:06:18.765 #undef SPDK_CONFIG_ASAN 00:06:18.765 #define SPDK_CONFIG_AVAHI 1 00:06:18.765 #undef SPDK_CONFIG_CET 00:06:18.765 #define SPDK_CONFIG_COVERAGE 1 00:06:18.765 #define SPDK_CONFIG_CROSS_PREFIX 00:06:18.765 #undef SPDK_CONFIG_CRYPTO 00:06:18.765 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:18.765 #undef SPDK_CONFIG_CUSTOMOCF 00:06:18.765 #undef SPDK_CONFIG_DAOS 00:06:18.765 #define SPDK_CONFIG_DAOS_DIR 00:06:18.765 #define SPDK_CONFIG_DEBUG 1 00:06:18.765 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:18.765 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:18.765 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:18.765 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:18.765 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:18.765 #undef SPDK_CONFIG_DPDK_UADK 00:06:18.765 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:18.765 #define SPDK_CONFIG_EXAMPLES 1 00:06:18.765 #undef SPDK_CONFIG_FC 00:06:18.765 #define SPDK_CONFIG_FC_PATH 00:06:18.765 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:18.765 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:18.765 #undef SPDK_CONFIG_FUSE 00:06:18.765 #undef SPDK_CONFIG_FUZZER 00:06:18.765 #define SPDK_CONFIG_FUZZER_LIB 00:06:18.765 #define SPDK_CONFIG_GOLANG 1 00:06:18.765 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:18.765 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:18.765 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:18.765 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:18.765 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:18.765 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:18.765 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:18.765 #define SPDK_CONFIG_IDXD 1 00:06:18.765 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:18.765 #undef SPDK_CONFIG_IPSEC_MB 00:06:18.765 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:18.765 #define SPDK_CONFIG_ISAL 1 00:06:18.765 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:18.765 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:18.765 #define SPDK_CONFIG_LIBDIR 00:06:18.765 #undef SPDK_CONFIG_LTO 00:06:18.765 #define SPDK_CONFIG_MAX_LCORES 128 00:06:18.765 #define SPDK_CONFIG_NVME_CUSE 1 00:06:18.765 #undef SPDK_CONFIG_OCF 00:06:18.765 #define SPDK_CONFIG_OCF_PATH 00:06:18.765 #define SPDK_CONFIG_OPENSSL_PATH 00:06:18.765 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:18.765 #define SPDK_CONFIG_PGO_DIR 00:06:18.765 #undef SPDK_CONFIG_PGO_USE 00:06:18.765 #define SPDK_CONFIG_PREFIX /usr/local 00:06:18.765 #undef SPDK_CONFIG_RAID5F 00:06:18.765 #undef SPDK_CONFIG_RBD 00:06:18.765 #define SPDK_CONFIG_RDMA 1 00:06:18.765 #define SPDK_CONFIG_RDMA_PROV verbs 
00:06:18.765 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:18.765 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:18.765 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:18.765 #define SPDK_CONFIG_SHARED 1 00:06:18.765 #undef SPDK_CONFIG_SMA 00:06:18.765 #define SPDK_CONFIG_TESTS 1 00:06:18.765 #undef SPDK_CONFIG_TSAN 00:06:18.765 #define SPDK_CONFIG_UBLK 1 00:06:18.765 #define SPDK_CONFIG_UBSAN 1 00:06:18.765 #undef SPDK_CONFIG_UNIT_TESTS 00:06:18.765 #undef SPDK_CONFIG_URING 00:06:18.765 #define SPDK_CONFIG_URING_PATH 00:06:18.765 #undef SPDK_CONFIG_URING_ZNS 00:06:18.765 #define SPDK_CONFIG_USDT 1 00:06:18.765 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:18.765 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:18.765 #undef SPDK_CONFIG_VFIO_USER 00:06:18.765 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:18.765 #define SPDK_CONFIG_VHOST 1 00:06:18.765 #define SPDK_CONFIG_VIRTIO 1 00:06:18.765 #undef SPDK_CONFIG_VTUNE 00:06:18.765 #define SPDK_CONFIG_VTUNE_DIR 00:06:18.765 #define SPDK_CONFIG_WERROR 1 00:06:18.765 #define SPDK_CONFIG_WPDK_DIR 00:06:18.765 #undef SPDK_CONFIG_XNVME 00:06:18.765 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.765 20:22:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:18.766 20:22:40 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 
-- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:18.766 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@164 -- # : 0 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 1 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:19.026 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 65160 ]] 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 65160 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@331 -- # local mount target_dir 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.EG135B 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.EG135B/tests/target /tmp/spdk.EG135B 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6264512512 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2494353408 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12804096 00:06:19.027 
20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786296320 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244047360 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13786296320 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5244047360 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.027 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267756544 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=135168 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=96612425728 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3090354176 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:19.028 * Looking for test storage... 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=13786296320 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:19.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@1682 -- # set -o errtrace 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh 
]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.028 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:19.029 Cannot find device "nvmf_tgt_br" 00:06:19.029 20:22:40 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:19.029 Cannot find device "nvmf_tgt_br2" 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:19.029 Cannot find device "nvmf_tgt_br" 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:19.029 Cannot find device "nvmf_tgt_br2" 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:19.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:19.029 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:19.029 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 
-- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:19.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:06:19.289 00:06:19.289 --- 10.0.0.2 ping statistics --- 00:06:19.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.289 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:19.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:19.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:06:19.289 00:06:19.289 --- 10.0.0.3 ping statistics --- 00:06:19.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.289 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:19.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:19.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:06:19.289 00:06:19.289 --- 10.0.0.1 ping statistics --- 00:06:19.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.289 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:19.289 ************************************ 00:06:19.289 START TEST nvmf_filesystem_no_in_capsule 00:06:19.289 ************************************ 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65313 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65313 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65313 ']' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
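The nvmf_veth_init trace above builds the virtual test network that the TCP-transport tests run against. The sketch below is only a condensed reading aid: it restates the ip/iptables commands and names recorded in the log (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_tgt_if2/nvmf_tgt_br2, nvmf_br, 10.0.0.1-3/24) as a standalone script run from a root shell; it is not the test's own nvmf_veth_init implementation.

#!/usr/bin/env bash
# Condensed recreation of the topology traced above: a network namespace
# holding the target-side veth ends, a host-side bridge, and 10.0.0.0/24
# addressing for the initiator and the two target interfaces.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: initiator side, first target interface, second target interface
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-facing ends into the namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, targets 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side ends together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# sanity checks mirroring the pings recorded in the log
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With this topology in place, the nvmf_tgt process started below runs inside nvmf_tgt_ns_spdk (via NVMF_TARGET_NS_CMD) and, per the later nvmf_subsystem_add_listener trace, listens on 10.0.0.2 port 4420, which the initiator reaches from the root namespace through nvmf_init_if.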
00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.289 20:22:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:19.609 [2024-07-15 20:22:40.810314] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:19.609 [2024-07-15 20:22:40.810415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.609 [2024-07-15 20:22:40.959016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.609 [2024-07-15 20:22:41.032611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.609 [2024-07-15 20:22:41.032731] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.609 [2024-07-15 20:22:41.032745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.609 [2024-07-15 20:22:41.032755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.609 [2024-07-15 20:22:41.032765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.609 [2024-07-15 20:22:41.034929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.609 [2024-07-15 20:22:41.035149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.609 [2024-07-15 20:22:41.035352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.609 [2024-07-15 20:22:41.035358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 [2024-07-15 20:22:41.835234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.551 
20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:20.551 [2024-07-15 20:22:41.957126] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:20.551 { 00:06:20.551 "aliases": [ 00:06:20.551 "c93e2479-859a-4463-864b-22019a2ecf47" 00:06:20.551 ], 00:06:20.551 "assigned_rate_limits": { 00:06:20.551 "r_mbytes_per_sec": 0, 00:06:20.551 "rw_ios_per_sec": 0, 00:06:20.551 "rw_mbytes_per_sec": 0, 00:06:20.551 "w_mbytes_per_sec": 0 00:06:20.551 }, 00:06:20.551 "block_size": 512, 00:06:20.551 "claim_type": "exclusive_write", 00:06:20.551 "claimed": true, 00:06:20.551 "driver_specific": {}, 00:06:20.551 "memory_domains": [ 00:06:20.551 { 00:06:20.551 "dma_device_id": "system", 00:06:20.551 "dma_device_type": 1 00:06:20.551 }, 00:06:20.551 { 00:06:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.551 "dma_device_type": 2 00:06:20.551 } 00:06:20.551 ], 00:06:20.551 "name": "Malloc1", 00:06:20.551 "num_blocks": 1048576, 00:06:20.551 "product_name": "Malloc disk", 00:06:20.551 "supported_io_types": { 00:06:20.551 "abort": true, 00:06:20.551 "compare": false, 00:06:20.551 "compare_and_write": false, 00:06:20.551 "copy": true, 00:06:20.551 "flush": true, 00:06:20.551 "get_zone_info": false, 00:06:20.551 "nvme_admin": false, 00:06:20.551 "nvme_io": false, 00:06:20.551 "nvme_io_md": false, 00:06:20.551 "nvme_iov_md": false, 00:06:20.551 "read": true, 00:06:20.551 "reset": true, 00:06:20.551 "seek_data": false, 00:06:20.551 "seek_hole": false, 00:06:20.551 "unmap": true, 00:06:20.551 "write": true, 00:06:20.551 "write_zeroes": true, 00:06:20.551 "zcopy": true, 00:06:20.551 "zone_append": false, 00:06:20.551 "zone_management": false 00:06:20.551 }, 00:06:20.551 "uuid": "c93e2479-859a-4463-864b-22019a2ecf47", 00:06:20.551 "zoned": false 00:06:20.551 } 00:06:20.551 ]' 00:06:20.551 20:22:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:20.551 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:20.551 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:20.809 20:22:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:23.336 20:22:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.272 ************************************ 
00:06:24.272 START TEST filesystem_ext4 00:06:24.272 ************************************ 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:24.272 mke2fs 1.46.5 (30-Dec-2021) 00:06:24.272 Discarding device blocks: 0/522240 done 00:06:24.272 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:24.272 Filesystem UUID: d15ebd8b-9450-4f18-9ee5-92516bbb95c3 00:06:24.272 Superblock backups stored on blocks: 00:06:24.272 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:24.272 00:06:24.272 Allocating group tables: 0/64 done 00:06:24.272 Writing inode tables: 0/64 done 00:06:24.272 Creating journal (8192 blocks): done 00:06:24.272 Writing superblocks and filesystem accounting information: 0/64 done 00:06:24.272 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.272 20:22:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65313 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.272 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.531 ************************************ 00:06:24.531 END TEST filesystem_ext4 00:06:24.531 ************************************ 00:06:24.531 00:06:24.531 real 0m0.302s 00:06:24.531 user 0m0.022s 00:06:24.531 sys 0m0.046s 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.531 ************************************ 00:06:24.531 START TEST filesystem_btrfs 00:06:24.531 ************************************ 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:24.531 
20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:24.531 btrfs-progs v6.6.2 00:06:24.531 See https://btrfs.readthedocs.io for more information. 00:06:24.531 00:06:24.531 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:24.531 NOTE: several default settings have changed in version 5.15, please make sure 00:06:24.531 this does not affect your deployments: 00:06:24.531 - DUP for metadata (-m dup) 00:06:24.531 - enabled no-holes (-O no-holes) 00:06:24.531 - enabled free-space-tree (-R free-space-tree) 00:06:24.531 00:06:24.531 Label: (null) 00:06:24.531 UUID: b6c5b391-5ea0-4ccf-957f-7120ad3ec572 00:06:24.531 Node size: 16384 00:06:24.531 Sector size: 4096 00:06:24.531 Filesystem size: 510.00MiB 00:06:24.531 Block group profiles: 00:06:24.531 Data: single 8.00MiB 00:06:24.531 Metadata: DUP 32.00MiB 00:06:24.531 System: DUP 8.00MiB 00:06:24.531 SSD detected: yes 00:06:24.531 Zoned device: no 00:06:24.531 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:24.531 Runtime features: free-space-tree 00:06:24.531 Checksum: crc32c 00:06:24.531 Number of devices: 1 00:06:24.531 Devices: 00:06:24.531 ID SIZE PATH 00:06:24.531 1 510.00MiB /dev/nvme0n1p1 00:06:24.531 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:24.531 20:22:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65313 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:24.531 00:06:24.531 real 0m0.197s 00:06:24.531 user 0m0.018s 00:06:24.531 sys 0m0.060s 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.531 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:24.531 
************************************ 00:06:24.531 END TEST filesystem_btrfs 00:06:24.531 ************************************ 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:24.790 ************************************ 00:06:24.790 START TEST filesystem_xfs 00:06:24.790 ************************************ 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:24.790 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:24.790 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:24.790 = sectsz=512 attr=2, projid32bit=1 00:06:24.790 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:24.790 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:24.790 data = bsize=4096 blocks=130560, imaxpct=25 00:06:24.790 = sunit=0 swidth=0 blks 00:06:24.790 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:24.790 log =internal log bsize=4096 blocks=16384, version=2 00:06:24.790 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:24.790 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:25.355 Discarding blocks...Done. 
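Each filesystem_* subtest above runs the same check cycle against the first partition of the connected namespace. A condensed sketch of one pass (xfs shown, matching the mkfs output above), with the namespace enumerated as /dev/nvme0n1 as in the trace and $nvmfpid assumed to hold the target's pid:

# Build the filesystem and exercise a trivial create/delete through the mount
mkfs.xfs -f /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device

# The target must still be alive, and the device plus its partition must still be visible
kill -0 "$nvmfpid"
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1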
00:06:25.355 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:25.355 20:22:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65313 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:27.879 00:06:27.879 real 0m3.116s 00:06:27.879 user 0m0.020s 00:06:27.879 sys 0m0.058s 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:27.879 ************************************ 00:06:27.879 END TEST filesystem_xfs 00:06:27.879 ************************************ 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:27.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:27.879 20:22:49 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65313 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65313 ']' 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65313 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65313 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.879 killing process with pid 65313 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65313' 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 65313 00:06:27.879 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 65313 00:06:28.136 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:28.136 00:06:28.136 real 0m8.878s 00:06:28.136 user 0m33.507s 00:06:28.136 sys 0m1.516s 00:06:28.136 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.136 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.136 ************************************ 00:06:28.136 END TEST nvmf_filesystem_no_in_capsule 00:06:28.136 ************************************ 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
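The no-in-capsule pass has now torn down, and the same nvmf_filesystem_part body is re-run with an in-capsule data size of 4096; per the trace, the only parameter that changes is the -c argument to nvmf_create_transport. A sketch of the provisioning sequence the second pass performs, written against scripts/rpc.py directly (the trace's rpc_cmd helper is assumed to be a thin wrapper around it), with the host UUID taken from the trace:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# TCP transport with an 8 KiB I/O unit and 4096-byte in-capsule data (0 in the first pass)
$RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096

# 512 MiB RAM-backed bdev, attached as a namespace of cnode1 and exposed on 10.0.0.2:4420
$RPC bdev_malloc_create 512 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Connect from the host side over TCP
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
    --hostid=ec49175a-6012-419b-81e2-f6fecd100da5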
00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 ************************************ 00:06:28.393 START TEST nvmf_filesystem_in_capsule 00:06:28.393 ************************************ 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65628 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65628 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 65628 ']' 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.393 20:22:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:28.393 [2024-07-15 20:22:49.735259] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:28.393 [2024-07-15 20:22:49.735382] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.393 [2024-07-15 20:22:49.874035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.651 [2024-07-15 20:22:49.964527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:28.651 [2024-07-15 20:22:49.964617] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.651 [2024-07-15 20:22:49.964639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.651 [2024-07-15 20:22:49.964656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:28.651 [2024-07-15 20:22:49.964710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:28.651 [2024-07-15 20:22:49.964857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.651 [2024-07-15 20:22:49.965458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.651 [2024-07-15 20:22:49.965546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.651 [2024-07-15 20:22:49.965764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 [2024-07-15 20:22:50.762615] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.587 20:22:50 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 [2024-07-15 20:22:50.898768] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:06:29.587 { 00:06:29.587 "aliases": [ 00:06:29.587 "82d78601-d8ec-4f49-8276-f3fce1f38a6e" 00:06:29.587 ], 00:06:29.587 "assigned_rate_limits": { 00:06:29.587 "r_mbytes_per_sec": 0, 00:06:29.587 "rw_ios_per_sec": 0, 00:06:29.587 "rw_mbytes_per_sec": 0, 00:06:29.587 "w_mbytes_per_sec": 0 00:06:29.587 }, 00:06:29.587 "block_size": 512, 00:06:29.587 "claim_type": "exclusive_write", 00:06:29.587 "claimed": true, 00:06:29.587 "driver_specific": {}, 00:06:29.587 "memory_domains": [ 00:06:29.587 { 00:06:29.587 "dma_device_id": "system", 00:06:29.587 "dma_device_type": 1 00:06:29.587 }, 00:06:29.587 { 00:06:29.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.587 "dma_device_type": 2 00:06:29.587 } 00:06:29.587 ], 00:06:29.587 "name": "Malloc1", 00:06:29.587 "num_blocks": 1048576, 00:06:29.587 "product_name": "Malloc disk", 00:06:29.587 "supported_io_types": { 00:06:29.587 "abort": true, 00:06:29.587 "compare": false, 00:06:29.587 "compare_and_write": false, 00:06:29.587 "copy": true, 00:06:29.587 "flush": true, 00:06:29.587 "get_zone_info": false, 00:06:29.587 "nvme_admin": false, 00:06:29.587 "nvme_io": false, 00:06:29.587 "nvme_io_md": false, 00:06:29.587 "nvme_iov_md": false, 00:06:29.587 "read": true, 00:06:29.587 "reset": true, 00:06:29.587 "seek_data": false, 00:06:29.587 "seek_hole": false, 00:06:29.587 "unmap": true, 
00:06:29.587 "write": true, 00:06:29.587 "write_zeroes": true, 00:06:29.587 "zcopy": true, 00:06:29.587 "zone_append": false, 00:06:29.587 "zone_management": false 00:06:29.587 }, 00:06:29.587 "uuid": "82d78601-d8ec-4f49-8276-f3fce1f38a6e", 00:06:29.587 "zoned": false 00:06:29.587 } 00:06:29.587 ]' 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:06:29.587 20:22:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:06:29.587 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:06:29.587 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:06:29.588 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:06:29.588 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:29.588 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:29.846 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:29.846 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:06:29.846 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:29.846 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:29.846 20:22:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:31.748 20:22:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:31.748 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:32.007 20:22:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:32.940 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:32.940 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:32.940 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.940 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.940 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:32.940 ************************************ 00:06:32.940 START TEST filesystem_in_capsule_ext4 00:06:32.940 ************************************ 00:06:32.940 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:06:32.941 20:22:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:32.941 mke2fs 1.46.5 (30-Dec-2021) 00:06:32.941 Discarding device blocks: 0/522240 done 00:06:32.941 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:32.941 Filesystem UUID: d299af7b-07cc-41e5-8ad2-b01c8ae259be 00:06:32.941 Superblock backups stored on blocks: 00:06:32.941 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:32.941 00:06:32.941 Allocating group tables: 0/64 done 00:06:32.941 Writing inode tables: 0/64 done 00:06:32.941 Creating journal (8192 blocks): done 00:06:32.941 Writing superblocks and filesystem accounting information: 0/64 done 00:06:32.941 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:06:32.941 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65628 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.199 00:06:33.199 real 0m0.339s 00:06:33.199 user 0m0.020s 00:06:33.199 sys 0m0.051s 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.199 ************************************ 00:06:33.199 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:33.199 END TEST filesystem_in_capsule_ext4 00:06:33.199 ************************************ 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:33.457 20:22:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.457 ************************************ 00:06:33.457 START TEST filesystem_in_capsule_btrfs 00:06:33.457 ************************************ 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:06:33.457 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:33.457 btrfs-progs v6.6.2 00:06:33.457 See https://btrfs.readthedocs.io for more information. 00:06:33.457 00:06:33.457 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:33.457 NOTE: several default settings have changed in version 5.15, please make sure 00:06:33.457 this does not affect your deployments: 00:06:33.457 - DUP for metadata (-m dup) 00:06:33.457 - enabled no-holes (-O no-holes) 00:06:33.457 - enabled free-space-tree (-R free-space-tree) 00:06:33.457 00:06:33.457 Label: (null) 00:06:33.457 UUID: 7c2622df-d35f-470e-9e28-58706dbeec38 00:06:33.457 Node size: 16384 00:06:33.457 Sector size: 4096 00:06:33.457 Filesystem size: 510.00MiB 00:06:33.457 Block group profiles: 00:06:33.458 Data: single 8.00MiB 00:06:33.458 Metadata: DUP 32.00MiB 00:06:33.458 System: DUP 8.00MiB 00:06:33.458 SSD detected: yes 00:06:33.458 Zoned device: no 00:06:33.458 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:33.458 Runtime features: free-space-tree 00:06:33.458 Checksum: crc32c 00:06:33.458 Number of devices: 1 00:06:33.458 Devices: 00:06:33.458 ID SIZE PATH 00:06:33.458 1 510.00MiB /dev/nvme0n1p1 00:06:33.458 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65628 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.458 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.716 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.716 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.716 00:06:33.716 real 0m0.256s 00:06:33.716 user 0m0.021s 00:06:33.716 sys 0m0.063s 00:06:33.716 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.716 20:22:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:33.716 ************************************ 00:06:33.716 END TEST filesystem_in_capsule_btrfs 00:06:33.716 ************************************ 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1142 -- # return 0 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:33.716 ************************************ 00:06:33.716 START TEST filesystem_in_capsule_xfs 00:06:33.716 ************************************ 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:06:33.716 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:33.716 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:33.717 = sectsz=512 attr=2, projid32bit=1 00:06:33.717 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:33.717 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:33.717 data = bsize=4096 blocks=130560, imaxpct=25 00:06:33.717 = sunit=0 swidth=0 blks 00:06:33.717 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:33.717 log =internal log bsize=4096 blocks=16384, version=2 00:06:33.717 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:33.717 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:34.284 Discarding blocks...Done. 
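The xfs pass above runs the same in-capsule filesystem check as the btrfs pass before it: put a filesystem on the exported namespace, mount it, do a little I/O, and make sure the target survives. A minimal sketch of that sequence, condensed from the commands traced in this run (the real target/filesystem.sh and its make_filesystem helper carry extra bookkeeping that is elided here):

  # sketch of the traced steps, not the script source
  dev=/dev/nvme0n1p1
  mkfs.xfs -f "$dev"                 # the btrfs pass used mkfs.btrfs -f
  mount "$dev" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                 # pid 65628 here: the nvmf target must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1   # namespace is still exposed after umount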
00:06:34.284 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:06:34.284 20:22:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65628 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:36.229 00:06:36.229 real 0m2.549s 00:06:36.229 user 0m0.024s 00:06:36.229 sys 0m0.048s 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:36.229 ************************************ 00:06:36.229 END TEST filesystem_in_capsule_xfs 00:06:36.229 ************************************ 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:36.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:06:36.229 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:06:36.229 20:22:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65628 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 65628 ']' 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 65628 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65628 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.489 killing process with pid 65628 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65628' 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 65628 00:06:36.489 20:22:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 65628 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:36.747 00:06:36.747 real 0m8.406s 00:06:36.747 user 0m31.709s 00:06:36.747 sys 0m1.458s 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:36.747 ************************************ 00:06:36.747 END TEST nvmf_filesystem_in_capsule 00:06:36.747 ************************************ 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:36.747 rmmod nvme_tcp 00:06:36.747 rmmod nvme_fabrics 00:06:36.747 rmmod nvme_keyring 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:36.747 00:06:36.747 real 0m18.116s 00:06:36.747 user 1m5.458s 00:06:36.747 sys 0m3.363s 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.747 20:22:58 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:36.747 ************************************ 00:06:36.747 END TEST nvmf_filesystem 00:06:36.747 ************************************ 00:06:37.006 20:22:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:37.006 20:22:58 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:37.006 20:22:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:37.006 20:22:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.006 20:22:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.006 ************************************ 00:06:37.006 START TEST nvmf_target_discovery 00:06:37.006 ************************************ 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:37.006 * Looking for test storage... 
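Both filesystem suites finish with the teardown traced just above: disconnect the initiator, delete the subsystem over RPC, kill the target, then unload the kernel modules and flush the test interface. Roughly, using the names from this run (the _remove_spdk_ns step is assumed to delete the nvmf_tgt_ns_spdk namespace; its output is hidden in the trace):

  # approximate teardown order reconstructed from the trace, not the exact common.sh code
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"; wait "$nvmfpid"
  sync
  modprobe -v -r nvme-tcp            # rmmod messages above show nvme_fabrics/nvme_keyring go with it
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns                    # assumed: ip netns delete nvmf_tgt_ns_spdk
  ip -4 addr flush nvmf_init_if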
00:06:37.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.006 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:37.007 Cannot find device "nvmf_tgt_br" 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:37.007 Cannot find device "nvmf_tgt_br2" 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:37.007 Cannot find device "nvmf_tgt_br" 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:37.007 Cannot find device "nvmf_tgt_br2" 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:37.007 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:37.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:37.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:37.265 20:22:58 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:37.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:06:37.265 00:06:37.265 --- 10.0.0.2 ping statistics --- 00:06:37.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.265 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:37.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:37.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:06:37.265 00:06:37.265 --- 10.0.0.3 ping statistics --- 00:06:37.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.265 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:37.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:06:37.265 00:06:37.265 --- 10.0.0.1 ping statistics --- 00:06:37.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.265 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.265 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66076 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66076 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 66076 ']' 00:06:37.266 20:22:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.266 20:22:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.524 [2024-07-15 20:22:58.829991] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:37.524 [2024-07-15 20:22:58.830102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.524 [2024-07-15 20:22:58.982281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.783 [2024-07-15 20:22:59.055618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.783 [2024-07-15 20:22:59.055692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.783 [2024-07-15 20:22:59.055710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.783 [2024-07-15 20:22:59.055724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.783 [2024-07-15 20:22:59.055736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
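Everything nvmf_veth_init and nvmfappstart did above amounts to a small two-namespace TCP testbed with the target running inside the namespace. A rough sketch using the names and addresses from this trace (the link-up commands, the second target interface nvmf_tgt_if2/10.0.0.3, and the bridge FORWARD rule are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # pid 66076 in this run; the pings above confirm 10.0.0.1 <-> 10.0.0.2/10.0.0.3 before the target starts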
00:06:37.783 [2024-07-15 20:22:59.055861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.783 [2024-07-15 20:22:59.059911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.783 [2024-07-15 20:22:59.060080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.783 [2024-07-15 20:22:59.060095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 [2024-07-15 20:22:59.187387] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 Null1 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.783 [2024-07-15 20:22:59.241859] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 Null2 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.783 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.784 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.042 Null3 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.042 20:22:59 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.042 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 Null4 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 
20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 4420 00:06:38.043 00:06:38.043 Discovery Log Number of Records 6, Generation counter 6 00:06:38.043 =====Discovery Log Entry 0====== 00:06:38.043 trtype: tcp 00:06:38.043 adrfam: ipv4 00:06:38.043 subtype: current discovery subsystem 00:06:38.043 treq: not required 00:06:38.043 portid: 0 00:06:38.043 trsvcid: 4420 00:06:38.043 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:38.043 traddr: 10.0.0.2 00:06:38.043 eflags: explicit discovery connections, duplicate discovery information 00:06:38.043 sectype: none 00:06:38.043 =====Discovery Log Entry 1====== 00:06:38.043 trtype: tcp 00:06:38.043 adrfam: ipv4 00:06:38.043 subtype: nvme subsystem 00:06:38.043 treq: not required 00:06:38.043 portid: 0 00:06:38.043 trsvcid: 4420 00:06:38.043 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:38.043 traddr: 10.0.0.2 00:06:38.043 eflags: none 00:06:38.043 sectype: none 00:06:38.043 =====Discovery Log Entry 2====== 00:06:38.043 trtype: tcp 00:06:38.043 adrfam: ipv4 00:06:38.043 subtype: nvme subsystem 00:06:38.043 treq: not required 00:06:38.043 portid: 0 00:06:38.043 trsvcid: 4420 00:06:38.043 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:38.043 traddr: 10.0.0.2 00:06:38.043 eflags: none 00:06:38.043 sectype: none 00:06:38.043 =====Discovery Log Entry 3====== 00:06:38.043 trtype: tcp 00:06:38.043 adrfam: ipv4 00:06:38.043 subtype: nvme subsystem 00:06:38.043 treq: not required 00:06:38.043 portid: 0 00:06:38.043 trsvcid: 4420 00:06:38.043 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:38.043 traddr: 10.0.0.2 00:06:38.043 eflags: none 00:06:38.043 sectype: none 00:06:38.043 =====Discovery Log Entry 4====== 00:06:38.043 trtype: tcp 00:06:38.043 adrfam: ipv4 00:06:38.043 subtype: nvme subsystem 00:06:38.043 treq: not required 00:06:38.043 portid: 0 00:06:38.043 trsvcid: 4420 00:06:38.043 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:38.043 traddr: 10.0.0.2 00:06:38.043 eflags: none 00:06:38.043 sectype: none 00:06:38.043 =====Discovery Log Entry 5====== 00:06:38.043 trtype: tcp 00:06:38.043 adrfam: ipv4 00:06:38.043 subtype: discovery subsystem referral 00:06:38.043 treq: not required 00:06:38.043 portid: 0 00:06:38.043 trsvcid: 4430 00:06:38.043 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:38.043 traddr: 10.0.0.2 00:06:38.043 eflags: none 00:06:38.043 sectype: none 00:06:38.043 Perform nvmf subsystem discovery via RPC 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 [ 00:06:38.043 { 00:06:38.043 "allow_any_host": true, 00:06:38.043 "hosts": [], 00:06:38.043 "listen_addresses": [ 00:06:38.043 { 00:06:38.043 "adrfam": "IPv4", 00:06:38.043 "traddr": "10.0.0.2", 00:06:38.043 "trsvcid": "4420", 00:06:38.043 "trtype": "TCP" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:38.043 "subtype": "Discovery" 00:06:38.043 }, 00:06:38.043 { 00:06:38.043 "allow_any_host": true, 00:06:38.043 "hosts": [], 00:06:38.043 "listen_addresses": [ 00:06:38.043 { 
00:06:38.043 "adrfam": "IPv4", 00:06:38.043 "traddr": "10.0.0.2", 00:06:38.043 "trsvcid": "4420", 00:06:38.043 "trtype": "TCP" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "max_cntlid": 65519, 00:06:38.043 "max_namespaces": 32, 00:06:38.043 "min_cntlid": 1, 00:06:38.043 "model_number": "SPDK bdev Controller", 00:06:38.043 "namespaces": [ 00:06:38.043 { 00:06:38.043 "bdev_name": "Null1", 00:06:38.043 "name": "Null1", 00:06:38.043 "nguid": "3A329F86DBAF402786BFC99830BA8D53", 00:06:38.043 "nsid": 1, 00:06:38.043 "uuid": "3a329f86-dbaf-4027-86bf-c99830ba8d53" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:38.043 "serial_number": "SPDK00000000000001", 00:06:38.043 "subtype": "NVMe" 00:06:38.043 }, 00:06:38.043 { 00:06:38.043 "allow_any_host": true, 00:06:38.043 "hosts": [], 00:06:38.043 "listen_addresses": [ 00:06:38.043 { 00:06:38.043 "adrfam": "IPv4", 00:06:38.043 "traddr": "10.0.0.2", 00:06:38.043 "trsvcid": "4420", 00:06:38.043 "trtype": "TCP" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "max_cntlid": 65519, 00:06:38.043 "max_namespaces": 32, 00:06:38.043 "min_cntlid": 1, 00:06:38.043 "model_number": "SPDK bdev Controller", 00:06:38.043 "namespaces": [ 00:06:38.043 { 00:06:38.043 "bdev_name": "Null2", 00:06:38.043 "name": "Null2", 00:06:38.043 "nguid": "83F6007B732A4902A1B8FA9D3E045F6E", 00:06:38.043 "nsid": 1, 00:06:38.043 "uuid": "83f6007b-732a-4902-a1b8-fa9d3e045f6e" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:38.043 "serial_number": "SPDK00000000000002", 00:06:38.043 "subtype": "NVMe" 00:06:38.043 }, 00:06:38.043 { 00:06:38.043 "allow_any_host": true, 00:06:38.043 "hosts": [], 00:06:38.043 "listen_addresses": [ 00:06:38.043 { 00:06:38.043 "adrfam": "IPv4", 00:06:38.043 "traddr": "10.0.0.2", 00:06:38.043 "trsvcid": "4420", 00:06:38.043 "trtype": "TCP" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "max_cntlid": 65519, 00:06:38.043 "max_namespaces": 32, 00:06:38.043 "min_cntlid": 1, 00:06:38.043 "model_number": "SPDK bdev Controller", 00:06:38.043 "namespaces": [ 00:06:38.043 { 00:06:38.043 "bdev_name": "Null3", 00:06:38.043 "name": "Null3", 00:06:38.043 "nguid": "4D26CEADAC264704B87657FB78C72ED2", 00:06:38.043 "nsid": 1, 00:06:38.043 "uuid": "4d26cead-ac26-4704-b876-57fb78c72ed2" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:38.043 "serial_number": "SPDK00000000000003", 00:06:38.043 "subtype": "NVMe" 00:06:38.043 }, 00:06:38.043 { 00:06:38.043 "allow_any_host": true, 00:06:38.043 "hosts": [], 00:06:38.043 "listen_addresses": [ 00:06:38.043 { 00:06:38.043 "adrfam": "IPv4", 00:06:38.043 "traddr": "10.0.0.2", 00:06:38.043 "trsvcid": "4420", 00:06:38.043 "trtype": "TCP" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "max_cntlid": 65519, 00:06:38.043 "max_namespaces": 32, 00:06:38.043 "min_cntlid": 1, 00:06:38.043 "model_number": "SPDK bdev Controller", 00:06:38.043 "namespaces": [ 00:06:38.043 { 00:06:38.043 "bdev_name": "Null4", 00:06:38.043 "name": "Null4", 00:06:38.043 "nguid": "4A285F72EA5E42C0AF90481AA9FEE62B", 00:06:38.043 "nsid": 1, 00:06:38.043 "uuid": "4a285f72-ea5e-42c0-af90-481aa9fee62b" 00:06:38.043 } 00:06:38.043 ], 00:06:38.043 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:38.043 "serial_number": "SPDK00000000000004", 00:06:38.043 "subtype": "NVMe" 00:06:38.043 } 00:06:38.043 ] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 
1 4 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.043 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:38.302 rmmod nvme_tcp 00:06:38.302 rmmod nvme_fabrics 00:06:38.302 rmmod nvme_keyring 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66076 ']' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66076 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 66076 ']' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 66076 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66076 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.302 
20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.302 killing process with pid 66076 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66076' 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 66076 00:06:38.302 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 66076 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:38.560 00:06:38.560 real 0m1.643s 00:06:38.560 user 0m3.386s 00:06:38.560 sys 0m0.521s 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.560 20:22:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:38.560 ************************************ 00:06:38.560 END TEST nvmf_target_discovery 00:06:38.560 ************************************ 00:06:38.560 20:22:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:38.560 20:22:59 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:38.560 20:22:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.560 20:22:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.560 20:22:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:38.560 ************************************ 00:06:38.560 START TEST nvmf_referrals 00:06:38.560 ************************************ 00:06:38.560 20:22:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:38.560 * Looking for test storage... 
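The block above is the standard nvmftestfini teardown that closes the target tests in this log: unload the host-side NVMe modules, kill the nvmf_tgt instance the test started, and flush the initiator address. Condensed into plain shell it looks roughly like the sketch below; the ip netns delete line is an assumption about what the _remove_spdk_ns helper does, since the trace hides its body behind xtrace_disable.

  # hedged condensation of the teardown traced above (pid 66076 is this test's nvmf_tgt)
  modprobe -v -r nvme-tcp            # retried in a 1..20 loop with set +e in the trace
  modprobe -v -r nvme-fabrics
  kill 66076                         # killprocess from autotest_common.sh
  wait 66076                         # reap it, as the trace does with wait 66076
  ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns (not shown in the trace)
  ip -4 addr flush nvmf_init_if      # drop the initiator-side 10.0.0.1/24 address

The nvmf_referrals run that starts here ends with the same sequence.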
00:06:38.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.560 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.819 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:38.820 Cannot find device "nvmf_tgt_br" 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:38.820 Cannot find device "nvmf_tgt_br2" 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:38.820 Cannot find device "nvmf_tgt_br" 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:38.820 Cannot find device "nvmf_tgt_br2" 
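The Cannot find device / Cannot open network namespace messages here are expected on a clean host: nvmf_veth_init first tears down any leftover topology and only then builds a fresh one, so the delete commands fail harmlessly. The setup the next stretch of trace performs boils down to roughly the sketch below (names and addresses are the ones in the trace; the per-device ip link set ... up calls and the FORWARD iptables rule are omitted for brevity).

  # hedged condensation of the nvmf_veth_init setup traced below
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target-side veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target-side veth pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # bridge the *_br peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

The ping checks that follow confirm 10.0.0.2 and 10.0.0.3 are reachable from the host, and 10.0.0.1 from inside the namespace, before the target is started.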
00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:38.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:38.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:38.820 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:39.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:06:39.079 00:06:39.079 --- 10.0.0.2 ping statistics --- 00:06:39.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.079 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:39.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:39.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:06:39.079 00:06:39.079 --- 10.0.0.3 ping statistics --- 00:06:39.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.079 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:39.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:06:39.079 00:06:39.079 --- 10.0.0.1 ping statistics --- 00:06:39.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.079 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66292 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66292 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 66292 ']' 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
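From here the target for the referrals test runs inside the namespace, and the script polls /var/tmp/spdk.sock before issuing RPCs. Stripped of the rpc_cmd and get_referral_ips wrappers, the core of what the following trace does looks roughly like the sketch below. It assumes rpc_cmd forwards to scripts/rpc.py against the default socket, and it drops the --hostnqn/--hostid flags the trace passes to nvme discover; addresses, ports and jq filters are taken from the trace.

  # hedged sketch of the referral exercise traced below
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # ... waitforlisten blocks until /var/tmp/spdk.sock answers, then:
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # the same referrals must show up both in the RPC view and on the wire via discovery
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # the trace then removes each referral again, re-adds 127.0.0.2 with explicit subsystem
  # NQNs, and checks that both views empty out at the end

That, roughly, is the shape of the add/verify/remove cycles that fill the rest of this test's output.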
00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.079 20:23:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:39.079 [2024-07-15 20:23:00.496776] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:39.079 [2024-07-15 20:23:00.496900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.337 [2024-07-15 20:23:00.638306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.337 [2024-07-15 20:23:00.717696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.337 [2024-07-15 20:23:00.717768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.337 [2024-07-15 20:23:00.717782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.337 [2024-07-15 20:23:00.717792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.337 [2024-07-15 20:23:00.717801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:39.337 [2024-07-15 20:23:00.718504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.337 [2024-07-15 20:23:00.718620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.337 [2024-07-15 20:23:00.718946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.337 [2024-07-15 20:23:00.718967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 [2024-07-15 20:23:01.550448] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 [2024-07-15 20:23:01.580839] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 
--hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:40.270 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:40.528 20:23:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.528 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.528 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:40.785 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:40.786 20:23:02 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:40.786 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:41.044 20:23:02 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.044 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:41.303 
20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:41.303 rmmod nvme_tcp 00:06:41.303 rmmod nvme_fabrics 00:06:41.303 rmmod nvme_keyring 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66292 ']' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66292 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 66292 ']' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 66292 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66292 00:06:41.303 killing process with pid 66292 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66292' 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 66292 00:06:41.303 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 66292 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:41.561 00:06:41.561 real 0m3.010s 00:06:41.561 user 0m9.925s 00:06:41.561 sys 0m0.759s 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.561 20:23:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:41.561 ************************************ 00:06:41.561 END TEST nvmf_referrals 00:06:41.561 ************************************ 00:06:41.561 20:23:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:41.561 20:23:03 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:41.561 20:23:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:41.561 20:23:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.561 20:23:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:41.561 ************************************ 00:06:41.561 START TEST nvmf_connect_disconnect 00:06:41.561 ************************************ 00:06:41.561 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:41.820 * Looking for test storage... 00:06:41.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.820 20:23:03 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 
-- # '[' 0 -eq 1 ']' 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:41.820 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:06:41.821 Cannot find device "nvmf_tgt_br" 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:41.821 Cannot find device "nvmf_tgt_br2" 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:41.821 Cannot find device "nvmf_tgt_br" 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:41.821 Cannot find device "nvmf_tgt_br2" 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:41.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:41.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:41.821 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:42.079 20:23:03 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:42.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:06:42.079 00:06:42.079 --- 10.0.0.2 ping statistics --- 00:06:42.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.079 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:42.079 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:42.079 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:06:42.079 00:06:42.079 --- 10.0.0.3 ping statistics --- 00:06:42.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.079 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:42.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:06:42.079 00:06:42.079 --- 10.0.0.1 ping statistics --- 00:06:42.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.079 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66593 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66593 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 66593 ']' 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.079 20:23:03 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:42.079 [2024-07-15 20:23:03.550739] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:42.079 [2024-07-15 20:23:03.550849] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.337 [2024-07-15 20:23:03.693415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.337 [2024-07-15 20:23:03.757338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
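For reference, the nvmf_veth_init sequence traced above reduces to the sketch below (commands copied from the trace; the best-effort cleanup attempts and their "Cannot find device" / "Cannot open network namespace" messages are skipped, and the helper itself lives in test/nvmf/common.sh):

# one network namespace for the target, three veth pairs, one bridge for the host-side ends
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # only the _if ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br        # the three host-side peers hang off one bridge
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # 4420 is the NVMe/TCP listener port used later
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # the sanity pings whose output appears above
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1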
00:06:42.337 [2024-07-15 20:23:03.757403] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.337 [2024-07-15 20:23:03.757415] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.337 [2024-07-15 20:23:03.757424] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.337 [2024-07-15 20:23:03.757431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.337 [2024-07-15 20:23:03.757568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.337 [2024-07-15 20:23:03.757843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.337 [2024-07-15 20:23:03.757845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.337 [2024-07-15 20:23:03.757679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 [2024-07-15 20:23:04.594893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
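The rpc_cmd calls in this stretch of the trace are what actually configure the target. Assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock (an assumption; the wrapper is defined in common/autotest_common.sh), the equivalent manual sequence is roughly:

# sketch only; arguments copied from the traced rpc_cmd lines
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0          # TCP transport, flags as traced
scripts/rpc.py bdev_malloc_create 64 512                             # 64 MB malloc bdev, 512-byte blocks, returns "Malloc0"
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # traced a few entries below

Because the RPC endpoint is a Unix-domain socket, these calls work from outside the network namespace; only the NVMe/TCP listener at 10.0.0.2:4420 lives inside nvmf_tgt_ns_spdk.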
00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:43.334 [2024-07-15 20:23:04.655398] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:43.334 20:23:04 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:45.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:47.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:50.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:54.715 rmmod nvme_tcp 00:06:54.715 rmmod nvme_fabrics 00:06:54.715 rmmod nvme_keyring 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66593 ']' 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66593 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 66593 ']' 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 66593 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66593 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.715 killing process with pid 66593 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66593' 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 66593 00:06:54.715 20:23:15 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 66593 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:54.715 00:06:54.715 real 0m13.166s 00:06:54.715 user 0m48.577s 00:06:54.715 sys 0m1.894s 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.715 20:23:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:54.715 ************************************ 00:06:54.715 END TEST nvmf_connect_disconnect 00:06:54.715 ************************************ 00:06:54.974 20:23:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:54.974 20:23:16 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:54.974 20:23:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:54.974 20:23:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.974 20:23:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.974 ************************************ 00:06:54.974 START TEST nvmf_multitarget 00:06:54.974 ************************************ 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:54.974 * Looking for test storage... 
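The five "disconnected 1 controller(s)" lines above are the only visible output of the connect/disconnect loop: num_iterations=5 and the loop runs after set +x, so its commands are not traced. A plausible per-iteration sketch with nvme-cli, using the listener parameters from the trace (the real loop body sits in target/connect_disconnect.sh and also passes the --hostnqn/--hostid arguments common.sh prepares):

for i in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # attach one controller
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
done

nvmftestfini then tears everything down, which is what the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines, killprocess 66593 and the final ip -4 addr flush nvmf_init_if above correspond to.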
00:06:54.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.974 20:23:16 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:54.974 Cannot find device "nvmf_tgt_br" 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:54.974 Cannot find device "nvmf_tgt_br2" 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:54.974 Cannot find device "nvmf_tgt_br" 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:54.974 Cannot find device "nvmf_tgt_br2" 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:54.974 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:06:55.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.232 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:55.232 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:55.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:55.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:06:55.233 00:06:55.233 --- 10.0.0.2 ping statistics --- 00:06:55.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.233 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:55.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:55.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:06:55.233 00:06:55.233 --- 10.0.0.3 ping statistics --- 00:06:55.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.233 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:55.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:06:55.233 00:06:55.233 --- 10.0.0.1 ping statistics --- 00:06:55.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.233 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=66984 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 66984 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 66984 ']' 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.233 20:23:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:55.491 [2024-07-15 20:23:16.752280] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:55.491 [2024-07-15 20:23:16.753271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.491 [2024-07-15 20:23:16.897272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.491 [2024-07-15 20:23:16.970623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.491 [2024-07-15 20:23:16.970865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.491 [2024-07-15 20:23:16.971116] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.491 [2024-07-15 20:23:16.971235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.491 [2024-07-15 20:23:16.971250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.491 [2024-07-15 20:23:16.971404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.491 [2024-07-15 20:23:16.972100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.491 [2024-07-15 20:23:16.972236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.491 [2024-07-15 20:23:16.972237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:55.749 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:56.008 "nvmf_tgt_1" 00:06:56.008 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:56.008 "nvmf_tgt_2" 00:06:56.008 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:06:56.008 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:56.266 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:56.266 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:56.266 true 00:06:56.524 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:56.524 true 00:06:56.524 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:56.524 20:23:17 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:56.782 rmmod nvme_tcp 00:06:56.782 rmmod nvme_fabrics 00:06:56.782 rmmod nvme_keyring 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 66984 ']' 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 66984 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 66984 ']' 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 66984 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66984 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:56.782 killing process with pid 66984 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66984' 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 66984 00:06:56.782 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 66984 00:06:57.041 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:57.042 00:06:57.042 real 0m2.200s 00:06:57.042 user 0m6.777s 00:06:57.042 sys 0m0.624s 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.042 ************************************ 00:06:57.042 END TEST nvmf_multitarget 00:06:57.042 ************************************ 00:06:57.042 20:23:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:57.042 20:23:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:06:57.042 20:23:18 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:57.042 20:23:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:57.042 20:23:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.042 20:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.042 ************************************ 00:06:57.042 START TEST nvmf_rpc 00:06:57.042 ************************************ 00:06:57.042 20:23:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:57.301 * Looking for test storage... 
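Stripped of the xtrace framing, the nvmf_multitarget run that just finished exercises the extra-target RPCs through the test's own multitarget_rpc.py helper. The sequence, copied from the trace, is:

rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
"$rpc" nvmf_get_targets | jq length             # 1: only the default target exists
"$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32   # prints "nvmf_tgt_1"
"$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32   # prints "nvmf_tgt_2"
"$rpc" nvmf_get_targets | jq length             # 3: default target plus the two new ones
"$rpc" nvmf_delete_target -n nvmf_tgt_1         # prints "true"
"$rpc" nvmf_delete_target -n nvmf_tgt_2         # prints "true"
"$rpc" nvmf_get_targets | jq length             # back to 1

Each jq length value is asserted by the '[' N '!=' N ']' checks visible in the trace.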
00:06:57.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:57.301 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:57.302 Cannot find device "nvmf_tgt_br" 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:57.302 Cannot find device "nvmf_tgt_br2" 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:57.302 Cannot find device "nvmf_tgt_br" 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:57.302 Cannot find device "nvmf_tgt_br2" 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:57.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:57.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:57.302 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:57.559 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:57.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:06:57.560 00:06:57.560 --- 10.0.0.2 ping statistics --- 00:06:57.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.560 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:57.560 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:57.560 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:06:57.560 00:06:57.560 --- 10.0.0.3 ping statistics --- 00:06:57.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.560 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:57.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:57.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:06:57.560 00:06:57.560 --- 10.0.0.1 ping statistics --- 00:06:57.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.560 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:57.560 20:23:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=67203 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 67203 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 67203 ']' 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.560 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.818 [2024-07-15 20:23:19.066784] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:06:57.818 [2024-07-15 20:23:19.066883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.818 [2024-07-15 20:23:19.206232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.818 [2024-07-15 20:23:19.277298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:57.818 [2024-07-15 20:23:19.277369] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
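At this point the third target instance of the excerpt is being started. The launch line traced above, annotated with what the startup notices that follow confirm (a sketch; nvmfappstart backgrounds the process and waitforlisten polls until /var/tmp/spdk.sock answers):

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
# -m 0xF    -> 4 cores, hence "Total cores available: 4", 4 reactors and the 4 poll groups counted further down
# -e 0xFFFF -> "Tracepoint Group Mask 0xFFFF specified" (the value comes from the NVMF_APP array)
# -i 0      -> NVMF_APP_SHM_ID, matching --file-prefix=spdk0 and the suggested 'spdk_trace -s nvmf -i 0'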
00:06:57.818 [2024-07-15 20:23:19.277382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:57.818 [2024-07-15 20:23:19.277392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:57.818 [2024-07-15 20:23:19.277400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:57.818 [2024-07-15 20:23:19.278471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.818 [2024-07-15 20:23:19.278584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.818 [2024-07-15 20:23:19.278697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.818 [2024-07-15 20:23:19.278703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:58.077 "poll_groups": [ 00:06:58.077 { 00:06:58.077 "admin_qpairs": 0, 00:06:58.077 "completed_nvme_io": 0, 00:06:58.077 "current_admin_qpairs": 0, 00:06:58.077 "current_io_qpairs": 0, 00:06:58.077 "io_qpairs": 0, 00:06:58.077 "name": "nvmf_tgt_poll_group_000", 00:06:58.077 "pending_bdev_io": 0, 00:06:58.077 "transports": [] 00:06:58.077 }, 00:06:58.077 { 00:06:58.077 "admin_qpairs": 0, 00:06:58.077 "completed_nvme_io": 0, 00:06:58.077 "current_admin_qpairs": 0, 00:06:58.077 "current_io_qpairs": 0, 00:06:58.077 "io_qpairs": 0, 00:06:58.077 "name": "nvmf_tgt_poll_group_001", 00:06:58.077 "pending_bdev_io": 0, 00:06:58.077 "transports": [] 00:06:58.077 }, 00:06:58.077 { 00:06:58.077 "admin_qpairs": 0, 00:06:58.077 "completed_nvme_io": 0, 00:06:58.077 "current_admin_qpairs": 0, 00:06:58.077 "current_io_qpairs": 0, 00:06:58.077 "io_qpairs": 0, 00:06:58.077 "name": "nvmf_tgt_poll_group_002", 00:06:58.077 "pending_bdev_io": 0, 00:06:58.077 "transports": [] 00:06:58.077 }, 00:06:58.077 { 00:06:58.077 "admin_qpairs": 0, 00:06:58.077 "completed_nvme_io": 0, 00:06:58.077 "current_admin_qpairs": 0, 00:06:58.077 "current_io_qpairs": 0, 00:06:58.077 "io_qpairs": 0, 00:06:58.077 "name": "nvmf_tgt_poll_group_003", 00:06:58.077 "pending_bdev_io": 0, 00:06:58.077 "transports": [] 00:06:58.077 } 00:06:58.077 ], 00:06:58.077 "tick_rate": 2200000000 00:06:58.077 }' 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.077 [2024-07-15 20:23:19.533212] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.077 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.336 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.336 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:58.336 "poll_groups": [ 00:06:58.336 { 00:06:58.336 "admin_qpairs": 0, 00:06:58.336 "completed_nvme_io": 0, 00:06:58.336 "current_admin_qpairs": 0, 00:06:58.336 "current_io_qpairs": 0, 00:06:58.336 "io_qpairs": 0, 00:06:58.337 "name": "nvmf_tgt_poll_group_000", 00:06:58.337 "pending_bdev_io": 0, 00:06:58.337 "transports": [ 00:06:58.337 { 00:06:58.337 "trtype": "TCP" 00:06:58.337 } 00:06:58.337 ] 00:06:58.337 }, 00:06:58.337 { 00:06:58.337 "admin_qpairs": 0, 00:06:58.337 "completed_nvme_io": 0, 00:06:58.337 "current_admin_qpairs": 0, 00:06:58.337 "current_io_qpairs": 0, 00:06:58.337 "io_qpairs": 0, 00:06:58.337 "name": "nvmf_tgt_poll_group_001", 00:06:58.337 "pending_bdev_io": 0, 00:06:58.337 "transports": [ 00:06:58.337 { 00:06:58.337 "trtype": "TCP" 00:06:58.337 } 00:06:58.337 ] 00:06:58.337 }, 00:06:58.337 { 00:06:58.337 "admin_qpairs": 0, 00:06:58.337 "completed_nvme_io": 0, 00:06:58.337 "current_admin_qpairs": 0, 00:06:58.337 "current_io_qpairs": 0, 00:06:58.337 "io_qpairs": 0, 00:06:58.337 "name": "nvmf_tgt_poll_group_002", 00:06:58.337 "pending_bdev_io": 0, 00:06:58.337 "transports": [ 00:06:58.337 { 00:06:58.337 "trtype": "TCP" 00:06:58.337 } 00:06:58.337 ] 00:06:58.337 }, 00:06:58.337 { 00:06:58.337 "admin_qpairs": 0, 00:06:58.337 "completed_nvme_io": 0, 00:06:58.337 "current_admin_qpairs": 0, 00:06:58.337 "current_io_qpairs": 0, 00:06:58.337 "io_qpairs": 0, 00:06:58.337 "name": "nvmf_tgt_poll_group_003", 00:06:58.337 "pending_bdev_io": 0, 00:06:58.337 "transports": [ 00:06:58.337 { 00:06:58.337 "trtype": "TCP" 00:06:58.337 } 00:06:58.337 ] 00:06:58.337 } 00:06:58.337 ], 00:06:58.337 "tick_rate": 2200000000 00:06:58.337 }' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
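[annotation] For anyone replaying this setup by hand, the trace up to this point reduces to roughly the commands below. This is only a sketch: the interface names, addresses, core mask and binary path are copied from the trace; the veth pairs and the nvmf_tgt_ns_spdk namespace are created by nvmf/common.sh earlier in the log (not shown in this excerpt); and scripts/rpc.py is an assumption about what the rpc_cmd wrapper ultimately invokes.

    # address the initiator side and the target side (the target interfaces live in their own netns)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the initiator and target legs together and let NVMe/TCP traffic through
    # (the individual "ip link set ... up" steps are omitted for brevity)
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # start the target inside the namespace, then create the TCP transport over the RPC socket
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192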
00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.337 Malloc1 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.337 [2024-07-15 20:23:19.722701] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -a 10.0.0.2 -s 4420 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -a 10.0.0.2 -s 4420 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -a 10.0.0.2 -s 4420 00:06:58.337 [2024-07-15 20:23:19.740919] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5' 00:06:58.337 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:58.337 could not add new controller: failed to write to nvme-fabrics device 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.337 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:58.596 20:23:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:06:58.596 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:06:58.596 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:06:58.596 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:06:58.596 20:23:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:00.527 20:23:21 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:00.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:00.527 20:23:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:00.527 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.785 [2024-07-15 20:23:22.032116] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5' 00:07:00.785 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:00.785 could not add new controller: failed to write to nvme-fabrics device 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:00.785 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:03.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:03.317 20:23:24 
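[annotation] The target/rpc.sh@52..@78 pass above is the host-authorization round trip: with allow_any_host disabled the connect attempt is expected to fail with "does not allow host", it succeeds only after nvmf_subsystem_add_host, fails again once the host is removed, and works without an allow-list entry after allow_any_host is re-enabled. Condensed to the RPC and nvme-cli calls visible in the trace (a sketch only; scripts/rpc.py is assumed to be what rpc_cmd wraps, the --hostid flag used by the script is omitted, and HOSTNQN/SUBNQN just abbreviate the NQNs from the log):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_allow_any_host -d $SUBNQN
    nvme connect -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # expected failure: host not on the allow list
    scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # now succeeds
    nvme disconnect -n $SUBNQN
    scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN              # connect fails again after this
    scripts/rpc.py nvmf_subsystem_allow_any_host -e $SUBNQN
    nvme connect -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420 --hostnqn=$HOSTNQN   # succeeds without an add_host entry
    nvme disconnect -n $SUBNQN
    scripts/rpc.py nvmf_delete_subsystem $SUBNQN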
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 [2024-07-15 20:23:24.329364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:03.317 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:05.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 [2024-07-15 20:23:26.628852] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.220 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.479 20:23:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.479 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:07:05.479 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.479 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:05.479 20:23:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:07.384 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.642 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 [2024-07-15 20:23:28.936446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.643 20:23:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.643 20:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.643 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:07.643 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.643 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:07.643 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 [2024-07-15 20:23:31.232179] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:10.193 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.091 
20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 [2024-07-15 20:23:33.539543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.091 20:23:33 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.091 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:12.348 20:23:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:12.348 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:07:12.348 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:12.348 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:12.348 20:23:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:07:14.245 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:14.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 [2024-07-15 20:23:35.830647] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 [2024-07-15 20:23:35.878744] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.504 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 [2024-07-15 20:23:35.926757] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 [2024-07-15 20:23:35.974837] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.505 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.505 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.505 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.505 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
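[annotation] The two "seq 1 5" loops above exercise repeated subsystem setup and teardown. One iteration of the second loop (target/rpc.sh@99..@107), reduced to the RPCs seen in the trace (a sketch; scripts/rpc.py assumed to be what rpc_cmd wraps):

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The first loop (target/rpc.sh@81..@94) has the same shape, but adds the namespace with -n 5 and performs a real nvme connect/disconnect against the 10.0.0.2:4420 listener before removing namespace 5 and deleting the subsystem.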
00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 [2024-07-15 20:23:36.022847] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.763 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:14.763 "poll_groups": [ 00:07:14.763 { 00:07:14.763 "admin_qpairs": 2, 00:07:14.763 "completed_nvme_io": 68, 00:07:14.763 "current_admin_qpairs": 0, 00:07:14.763 "current_io_qpairs": 0, 00:07:14.763 "io_qpairs": 16, 00:07:14.763 "name": "nvmf_tgt_poll_group_000", 00:07:14.763 "pending_bdev_io": 0, 00:07:14.763 "transports": [ 00:07:14.763 { 00:07:14.763 "trtype": "TCP" 00:07:14.763 } 00:07:14.763 ] 00:07:14.763 }, 00:07:14.763 { 00:07:14.763 "admin_qpairs": 3, 00:07:14.763 "completed_nvme_io": 116, 00:07:14.763 "current_admin_qpairs": 0, 00:07:14.763 "current_io_qpairs": 0, 00:07:14.763 "io_qpairs": 17, 00:07:14.763 "name": "nvmf_tgt_poll_group_001", 00:07:14.763 "pending_bdev_io": 0, 00:07:14.763 "transports": [ 00:07:14.763 { 00:07:14.763 "trtype": "TCP" 00:07:14.763 } 00:07:14.763 ] 00:07:14.763 }, 00:07:14.763 { 00:07:14.763 "admin_qpairs": 1, 00:07:14.763 
"completed_nvme_io": 118, 00:07:14.763 "current_admin_qpairs": 0, 00:07:14.763 "current_io_qpairs": 0, 00:07:14.763 "io_qpairs": 19, 00:07:14.763 "name": "nvmf_tgt_poll_group_002", 00:07:14.763 "pending_bdev_io": 0, 00:07:14.764 "transports": [ 00:07:14.764 { 00:07:14.764 "trtype": "TCP" 00:07:14.764 } 00:07:14.764 ] 00:07:14.764 }, 00:07:14.764 { 00:07:14.764 "admin_qpairs": 1, 00:07:14.764 "completed_nvme_io": 118, 00:07:14.764 "current_admin_qpairs": 0, 00:07:14.764 "current_io_qpairs": 0, 00:07:14.764 "io_qpairs": 18, 00:07:14.764 "name": "nvmf_tgt_poll_group_003", 00:07:14.764 "pending_bdev_io": 0, 00:07:14.764 "transports": [ 00:07:14.764 { 00:07:14.764 "trtype": "TCP" 00:07:14.764 } 00:07:14.764 ] 00:07:14.764 } 00:07:14.764 ], 00:07:14.764 "tick_rate": 2200000000 00:07:14.764 }' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:14.764 rmmod nvme_tcp 00:07:14.764 rmmod nvme_fabrics 00:07:14.764 rmmod nvme_keyring 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 67203 ']' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 67203 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 67203 ']' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 67203 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.764 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67203 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.022 killing process with pid 67203 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67203' 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 67203 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 67203 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:15.022 00:07:15.022 real 0m18.003s 00:07:15.022 user 1m7.130s 00:07:15.022 sys 0m2.538s 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.022 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.022 ************************************ 00:07:15.022 END TEST nvmf_rpc 00:07:15.022 ************************************ 00:07:15.281 20:23:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:15.281 20:23:36 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:15.281 20:23:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:15.281 20:23:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.281 20:23:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.281 ************************************ 00:07:15.281 START TEST nvmf_invalid 00:07:15.281 ************************************ 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:15.281 * Looking for test storage... 
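The nvmf_get_stats check that closes the nvmf_rpc test above relies on rpc.sh's jsum helper: jq extracts one counter from every poll group and awk sums the column, and the test only asserts that the totals are positive after the I/O loop. A short sketch of that pattern; the jsum body is reconstructed from the trace rather than copied from rpc.sh:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # capture the target's transport statistics as JSON
    stats=$($rpc_py nvmf_get_stats)
    jsum() {
        # sum the selected field across all poll groups
        jq "$1" <<< "$stats" | awk '{s += $1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))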
00:07:15.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.281 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.282 
20:23:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.282 20:23:36 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:15.282 Cannot find device "nvmf_tgt_br" 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.282 Cannot find device "nvmf_tgt_br2" 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:15.282 Cannot find device "nvmf_tgt_br" 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:15.282 Cannot find device "nvmf_tgt_br2" 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:15.282 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.541 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:15.541 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:15.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:07:15.542 00:07:15.542 --- 10.0.0.2 ping statistics --- 00:07:15.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.542 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:15.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:07:15.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:07:15.542 00:07:15.542 --- 10.0.0.3 ping statistics --- 00:07:15.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.542 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:15.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:15.542 00:07:15.542 --- 10.0.0.1 ping statistics --- 00:07:15.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.542 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=67695 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 67695 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 67695 ']' 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.542 20:23:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:15.800 [2024-07-15 20:23:37.071337] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
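The nvmf_veth_init sequence above (NET_TYPE=virt) builds the whole test fabric in software: a veth pair per endpoint, a network namespace for the target, a bridge joining the host-side ends, and iptables rules for port 4420, verified by the three pings. Condensed to its essentials, assuming root privileges and a clean host, and leaving out the second target pair (nvmf_tgt_if2, 10.0.0.3), the setup is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # initiator address on the host, target address inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side veth ends so initiator and target can talk
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP traffic on the default port and check reachability
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2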
00:07:15.800 [2024-07-15 20:23:37.071446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.800 [2024-07-15 20:23:37.214216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.800 [2024-07-15 20:23:37.289422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.800 [2024-07-15 20:23:37.289498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.800 [2024-07-15 20:23:37.289512] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.800 [2024-07-15 20:23:37.289522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.800 [2024-07-15 20:23:37.289531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.800 [2024-07-15 20:23:37.289710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.800 [2024-07-15 20:23:37.290756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.800 [2024-07-15 20:23:37.290934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.800 [2024-07-15 20:23:37.290940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.736 20:23:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.736 20:23:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:07:16.736 20:23:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.736 20:23:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.736 20:23:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:16.736 20:23:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.736 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:16.736 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9345 00:07:17.021 [2024-07-15 20:23:38.303717] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:17.021 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/15 20:23:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9345 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:17.021 request: 00:07:17.021 { 00:07:17.021 "method": "nvmf_create_subsystem", 00:07:17.021 "params": { 00:07:17.021 "nqn": "nqn.2016-06.io.spdk:cnode9345", 00:07:17.021 "tgt_name": "foobar" 00:07:17.021 } 00:07:17.021 } 00:07:17.021 Got JSON-RPC error response 00:07:17.021 GoRPCClient: error on JSON-RPC call' 00:07:17.021 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/15 20:23:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9345 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:07:17.021 request: 
00:07:17.021 { 00:07:17.021 "method": "nvmf_create_subsystem", 00:07:17.021 "params": { 00:07:17.021 "nqn": "nqn.2016-06.io.spdk:cnode9345", 00:07:17.021 "tgt_name": "foobar" 00:07:17.021 } 00:07:17.021 } 00:07:17.021 Got JSON-RPC error response 00:07:17.021 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:17.021 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:17.021 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24085 00:07:17.308 [2024-07-15 20:23:38.559999] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24085: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:17.308 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/15 20:23:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24085 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:17.308 request: 00:07:17.308 { 00:07:17.308 "method": "nvmf_create_subsystem", 00:07:17.308 "params": { 00:07:17.308 "nqn": "nqn.2016-06.io.spdk:cnode24085", 00:07:17.308 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:17.308 } 00:07:17.308 } 00:07:17.308 Got JSON-RPC error response 00:07:17.308 GoRPCClient: error on JSON-RPC call' 00:07:17.308 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/15 20:23:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24085 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:07:17.308 request: 00:07:17.308 { 00:07:17.308 "method": "nvmf_create_subsystem", 00:07:17.308 "params": { 00:07:17.308 "nqn": "nqn.2016-06.io.spdk:cnode24085", 00:07:17.308 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:07:17.308 } 00:07:17.308 } 00:07:17.308 Got JSON-RPC error response 00:07:17.308 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:17.308 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:17.308 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17486 00:07:17.567 [2024-07-15 20:23:38.848249] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17486: invalid model number 'SPDK_Controller' 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/15 20:23:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode17486], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:17.567 request: 00:07:17.567 { 00:07:17.567 "method": "nvmf_create_subsystem", 00:07:17.567 "params": { 00:07:17.567 "nqn": "nqn.2016-06.io.spdk:cnode17486", 00:07:17.567 "model_number": "SPDK_Controller\u001f" 00:07:17.567 } 00:07:17.567 } 00:07:17.567 Got JSON-RPC error response 00:07:17.567 GoRPCClient: error on JSON-RPC call' 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/15 20:23:38 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode17486], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:07:17.567 request: 00:07:17.567 { 00:07:17.567 "method": "nvmf_create_subsystem", 00:07:17.567 "params": { 00:07:17.567 "nqn": "nqn.2016-06.io.spdk:cnode17486", 00:07:17.567 "model_number": "SPDK_Controller\u001f" 00:07:17.567 } 00:07:17.567 } 00:07:17.567 Got JSON-RPC error response 00:07:17.567 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:17.567 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
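The three rejections above are the point of the test: an unknown tgt_name (foobar), a serial number ending in a 0x1f control byte, and a model number carrying the same byte must each make nvmf_create_subsystem fail with the expected message, before the trace below goes on to build randomized strings for further checks. A reduced sketch of one such negative check; capturing stderr into out is an assumption about how invalid.sh collects the client output:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode24085
    # a serial number with a trailing unprintable 0x1f byte must be rejected
    if out=$($rpc_py nvmf_create_subsystem "$nqn" -s $'SPDKISFASTANDAWESOME\037' 2>&1); then
        echo "expected nvmf_create_subsystem to fail" >&2
        exit 1
    fi
    [[ $out == *"Invalid SN"* ]]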
00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '~hW.`A[GJsNW7Cr?}R#je' 00:07:17.568 20:23:38 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '~hW.`A[GJsNW7Cr?}R#je' nqn.2016-06.io.spdk:cnode28828 00:07:17.828 [2024-07-15 20:23:39.248636] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28828: invalid serial number '~hW.`A[GJsNW7Cr?}R#je' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/15 20:23:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28828 serial_number:~hW.`A[GJsNW7Cr?}R#je], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ~hW.`A[GJsNW7Cr?}R#je 00:07:17.828 request: 00:07:17.828 { 00:07:17.828 "method": "nvmf_create_subsystem", 00:07:17.828 "params": { 00:07:17.828 "nqn": "nqn.2016-06.io.spdk:cnode28828", 00:07:17.828 "serial_number": "~hW.`A[GJsNW7Cr?}R#je" 00:07:17.828 } 00:07:17.828 } 00:07:17.828 Got JSON-RPC error response 00:07:17.828 GoRPCClient: error on JSON-RPC call' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/15 20:23:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode28828 serial_number:~hW.`A[GJsNW7Cr?}R#je], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN ~hW.`A[GJsNW7Cr?}R#je 00:07:17.828 request: 00:07:17.828 { 00:07:17.828 "method": "nvmf_create_subsystem", 00:07:17.828 "params": { 00:07:17.828 "nqn": "nqn.2016-06.io.spdk:cnode28828", 00:07:17.828 "serial_number": "~hW.`A[GJsNW7Cr?}R#je" 00:07:17.828 } 00:07:17.828 } 00:07:17.828 Got JSON-RPC error response 00:07:17.828 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:07:17.828 20:23:39 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:17.828 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:07:18.088 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 
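The long character-by-character trace above (and closing just below) is invalid.sh's gen_random_s: with RANDOM seeded to a fixed value it draws indices into an array of ASCII codes 32 through 127 and appends each character via printf %x plus echo -e, yielding a 21-character serial and a 41-character model number for the next rejection tests. The same idea in compact form; this gen_random_s is rewritten from the trace, not copied from the script:

    gen_random_s() {
        local length=$1 ll code string=""
        for ((ll = 0; ll < length; ll++)); do
            # pick an ASCII code in 32..127, the range of the chars array in the trace
            code=$((32 + RANDOM % 96))
            # convert the decimal code to hex and append the matching character
            string+=$(echo -e "\\x$(printf %x "$code")")
        done
        echo "$string"
    }
    sn=$(gen_random_s 21)    # candidate serial number
    mn=$(gen_random_s 41)    # candidate model number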
00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H!j.1Z0QB*, .pq%P;%z2pzL'\''eF*,?tfu4/|w-=t;' 00:07:18.089 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'H!j.1Z0QB*, .pq%P;%z2pzL'\''eF*,?tfu4/|w-=t;' nqn.2016-06.io.spdk:cnode10157 00:07:18.347 [2024-07-15 20:23:39.749123] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10157: invalid model number 'H!j.1Z0QB*, .pq%P;%z2pzL'eF*,?tfu4/|w-=t;' 00:07:18.347 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/07/15 20:23:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:H!j.1Z0QB*, .pq%P;%z2pzL'\''eF*,?tfu4/|w-=t; nqn:nqn.2016-06.io.spdk:cnode10157], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN H!j.1Z0QB*, .pq%P;%z2pzL'\''eF*,?tfu4/|w-=t; 00:07:18.348 request: 00:07:18.348 { 00:07:18.348 "method": "nvmf_create_subsystem", 00:07:18.348 "params": { 00:07:18.348 "nqn": "nqn.2016-06.io.spdk:cnode10157", 00:07:18.348 "model_number": "H!j.1Z0QB*, .pq%P;%z2pzL'\''eF*,?tfu4/|w-=t;" 00:07:18.348 } 00:07:18.348 } 00:07:18.348 Got JSON-RPC error response 00:07:18.348 GoRPCClient: error on JSON-RPC call' 00:07:18.348 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/07/15 20:23:39 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:H!j.1Z0QB*, .pq%P;%z2pzL'eF*,?tfu4/|w-=t; nqn:nqn.2016-06.io.spdk:cnode10157], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN H!j.1Z0QB*, .pq%P;%z2pzL'eF*,?tfu4/|w-=t; 00:07:18.348 request: 00:07:18.348 { 00:07:18.348 "method": "nvmf_create_subsystem", 00:07:18.348 "params": { 00:07:18.348 "nqn": 
"nqn.2016-06.io.spdk:cnode10157", 00:07:18.348 "model_number": "H!j.1Z0QB*, .pq%P;%z2pzL'eF*,?tfu4/|w-=t;" 00:07:18.348 } 00:07:18.348 } 00:07:18.348 Got JSON-RPC error response 00:07:18.348 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:18.348 20:23:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:18.606 [2024-07-15 20:23:40.069447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.864 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:19.122 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:19.122 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:19.122 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:19.122 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:19.122 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:19.380 [2024-07-15 20:23:40.716644] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:19.380 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/07/15 20:23:40 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:19.380 request: 00:07:19.380 { 00:07:19.380 "method": "nvmf_subsystem_remove_listener", 00:07:19.380 "params": { 00:07:19.380 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:19.380 "listen_address": { 00:07:19.380 "trtype": "tcp", 00:07:19.380 "traddr": "", 00:07:19.380 "trsvcid": "4421" 00:07:19.380 } 00:07:19.380 } 00:07:19.380 } 00:07:19.380 Got JSON-RPC error response 00:07:19.380 GoRPCClient: error on JSON-RPC call' 00:07:19.380 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/07/15 20:23:40 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:07:19.380 request: 00:07:19.380 { 00:07:19.380 "method": "nvmf_subsystem_remove_listener", 00:07:19.380 "params": { 00:07:19.380 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:19.380 "listen_address": { 00:07:19.380 "trtype": "tcp", 00:07:19.380 "traddr": "", 00:07:19.380 "trsvcid": "4421" 00:07:19.380 } 00:07:19.380 } 00:07:19.380 } 00:07:19.380 Got JSON-RPC error response 00:07:19.380 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:19.380 20:23:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25014 -i 0 00:07:19.639 [2024-07-15 20:23:41.032887] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25014: invalid cntlid range [0-65519] 00:07:19.639 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode25014], err: error received for 
nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:19.639 request: 00:07:19.639 { 00:07:19.639 "method": "nvmf_create_subsystem", 00:07:19.639 "params": { 00:07:19.639 "nqn": "nqn.2016-06.io.spdk:cnode25014", 00:07:19.639 "min_cntlid": 0 00:07:19.639 } 00:07:19.639 } 00:07:19.639 Got JSON-RPC error response 00:07:19.639 GoRPCClient: error on JSON-RPC call' 00:07:19.639 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode25014], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:07:19.639 request: 00:07:19.639 { 00:07:19.639 "method": "nvmf_create_subsystem", 00:07:19.639 "params": { 00:07:19.639 "nqn": "nqn.2016-06.io.spdk:cnode25014", 00:07:19.639 "min_cntlid": 0 00:07:19.639 } 00:07:19.639 } 00:07:19.639 Got JSON-RPC error response 00:07:19.639 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:19.639 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13606 -i 65520 00:07:19.896 [2024-07-15 20:23:41.345214] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13606: invalid cntlid range [65520-65519] 00:07:19.896 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13606], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:19.896 request: 00:07:19.896 { 00:07:19.896 "method": "nvmf_create_subsystem", 00:07:19.896 "params": { 00:07:19.896 "nqn": "nqn.2016-06.io.spdk:cnode13606", 00:07:19.896 "min_cntlid": 65520 00:07:19.896 } 00:07:19.896 } 00:07:19.896 Got JSON-RPC error response 00:07:19.896 GoRPCClient: error on JSON-RPC call' 00:07:19.896 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13606], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:07:19.896 request: 00:07:19.896 { 00:07:19.896 "method": "nvmf_create_subsystem", 00:07:19.896 "params": { 00:07:19.896 "nqn": "nqn.2016-06.io.spdk:cnode13606", 00:07:19.896 "min_cntlid": 65520 00:07:19.896 } 00:07:19.896 } 00:07:19.896 Got JSON-RPC error response 00:07:19.896 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:19.896 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24787 -I 0 00:07:20.168 [2024-07-15 20:23:41.625486] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24787: invalid cntlid range [1-0] 00:07:20.168 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24787], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:20.168 request: 00:07:20.168 { 00:07:20.168 "method": "nvmf_create_subsystem", 00:07:20.168 "params": { 00:07:20.168 "nqn": 
"nqn.2016-06.io.spdk:cnode24787", 00:07:20.168 "max_cntlid": 0 00:07:20.168 } 00:07:20.168 } 00:07:20.168 Got JSON-RPC error response 00:07:20.168 GoRPCClient: error on JSON-RPC call' 00:07:20.168 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24787], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:07:20.168 request: 00:07:20.168 { 00:07:20.168 "method": "nvmf_create_subsystem", 00:07:20.168 "params": { 00:07:20.168 "nqn": "nqn.2016-06.io.spdk:cnode24787", 00:07:20.168 "max_cntlid": 0 00:07:20.168 } 00:07:20.168 } 00:07:20.168 Got JSON-RPC error response 00:07:20.168 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:20.168 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15462 -I 65520 00:07:20.430 [2024-07-15 20:23:41.881704] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15462: invalid cntlid range [1-65520] 00:07:20.430 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode15462], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:20.430 request: 00:07:20.430 { 00:07:20.430 "method": "nvmf_create_subsystem", 00:07:20.430 "params": { 00:07:20.430 "nqn": "nqn.2016-06.io.spdk:cnode15462", 00:07:20.430 "max_cntlid": 65520 00:07:20.430 } 00:07:20.430 } 00:07:20.430 Got JSON-RPC error response 00:07:20.430 GoRPCClient: error on JSON-RPC call' 00:07:20.430 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/07/15 20:23:41 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode15462], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:07:20.430 request: 00:07:20.430 { 00:07:20.430 "method": "nvmf_create_subsystem", 00:07:20.430 "params": { 00:07:20.430 "nqn": "nqn.2016-06.io.spdk:cnode15462", 00:07:20.430 "max_cntlid": 65520 00:07:20.430 } 00:07:20.430 } 00:07:20.430 Got JSON-RPC error response 00:07:20.430 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:20.430 20:23:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14583 -i 6 -I 5 00:07:20.688 [2024-07-15 20:23:42.125894] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14583: invalid cntlid range [6-5] 00:07:20.688 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/07/15 20:23:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode14583], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:20.688 request: 00:07:20.688 { 00:07:20.688 "method": "nvmf_create_subsystem", 00:07:20.688 "params": { 00:07:20.688 "nqn": "nqn.2016-06.io.spdk:cnode14583", 00:07:20.688 "min_cntlid": 6, 00:07:20.688 "max_cntlid": 5 00:07:20.688 } 00:07:20.688 } 00:07:20.688 Got JSON-RPC error response 00:07:20.688 GoRPCClient: error on JSON-RPC 
call' 00:07:20.689 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/07/15 20:23:42 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode14583], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:07:20.689 request: 00:07:20.689 { 00:07:20.689 "method": "nvmf_create_subsystem", 00:07:20.689 "params": { 00:07:20.689 "nqn": "nqn.2016-06.io.spdk:cnode14583", 00:07:20.689 "min_cntlid": 6, 00:07:20.689 "max_cntlid": 5 00:07:20.689 } 00:07:20.689 } 00:07:20.689 Got JSON-RPC error response 00:07:20.689 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:20.689 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:20.948 { 00:07:20.948 "name": "foobar", 00:07:20.948 "method": "nvmf_delete_target", 00:07:20.948 "req_id": 1 00:07:20.948 } 00:07:20.948 Got JSON-RPC error response 00:07:20.948 response: 00:07:20.948 { 00:07:20.948 "code": -32602, 00:07:20.948 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:20.948 }' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:20.948 { 00:07:20.948 "name": "foobar", 00:07:20.948 "method": "nvmf_delete_target", 00:07:20.948 "req_id": 1 00:07:20.948 } 00:07:20.948 Got JSON-RPC error response 00:07:20.948 response: 00:07:20.948 { 00:07:20.948 "code": -32602, 00:07:20.948 "message": "The specified target doesn't exist, cannot delete it." 00:07:20.948 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:20.948 rmmod nvme_tcp 00:07:20.948 rmmod nvme_fabrics 00:07:20.948 rmmod nvme_keyring 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 67695 ']' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 67695 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 67695 ']' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 67695 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 67695 00:07:20.948 killing process with pid 67695 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67695' 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 67695 00:07:20.948 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 67695 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:21.207 ************************************ 00:07:21.207 END TEST nvmf_invalid 00:07:21.207 ************************************ 00:07:21.207 00:07:21.207 real 0m6.028s 00:07:21.207 user 0m24.575s 00:07:21.207 sys 0m1.203s 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.207 20:23:42 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:21.207 20:23:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:21.207 20:23:42 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:21.207 20:23:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:21.207 20:23:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.207 20:23:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.207 ************************************ 00:07:21.207 START TEST nvmf_abort 00:07:21.207 ************************************ 00:07:21.207 20:23:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:21.207 * Looking for test storage... 
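The nvmf_invalid run that finishes above is purely negative testing of the target's JSON-RPC parameter validation: it builds a random 41-character model number that the target rejects as an invalid MN, probes nvmf_subsystem_remove_listener with an empty traddr, sends cntlid ranges of [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] to nvmf_create_subsystem expecting "Invalid cntlid range", and asks nvmf_delete_target to remove a nonexistent target named foobar. A minimal standalone sketch of just the cntlid checks, assuming a target is already serving RPC on the default /var/tmp/spdk.sock and assuming rpc.py exits non-zero and prints the server's error text when a call is rejected (the expect_error helper below is hypothetical, not part of the test suite), could look like:

#!/usr/bin/env bash
# Hypothetical re-creation of the negative cntlid-range checks traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

expect_error() {
    local pattern=$1; shift
    local out
    # The call is expected to fail; capture its output and match the error text.
    if out=$("$rpc" "$@" 2>&1); then
        echo "unexpected success: $*" >&2; return 1
    fi
    [[ $out == *"$pattern"* ]] || { echo "missing '$pattern' in: $out" >&2; return 1; }
}

expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25014 -i 0
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13606 -i 65520
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24787 -I 0
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15462 -I 65520
expect_error 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14583 -i 6 -I 5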
00:07:21.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@432 -- # nvmf_veth_init 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:21.466 Cannot find device "nvmf_tgt_br" 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:21.466 Cannot find device "nvmf_tgt_br2" 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:21.466 Cannot find device "nvmf_tgt_br" 00:07:21.466 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:21.467 Cannot find device "nvmf_tgt_br2" 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:21.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:21.467 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:21.467 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:21.725 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:21.725 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:21.725 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:21.725 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:21.725 20:23:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:21.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:07:21.725 00:07:21.725 --- 10.0.0.2 ping statistics --- 00:07:21.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.725 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:21.725 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:21.725 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:07:21.725 00:07:21.725 --- 10.0.0.3 ping statistics --- 00:07:21.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.725 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:21.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:21.725 00:07:21.725 --- 10.0.0.1 ping statistics --- 00:07:21.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.725 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=68207 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 68207 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 68207 ']' 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.725 20:23:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:21.725 [2024-07-15 20:23:43.158206] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:07:21.725 [2024-07-15 20:23:43.158303] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.982 [2024-07-15 20:23:43.295027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.982 [2024-07-15 20:23:43.364726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.982 [2024-07-15 20:23:43.365183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
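The nvmf_veth_init sequence traced above is what gives every tcp-transport test in this run its network: a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2 and 10.0.0.3), an initiator-side nvmf_init_if at 10.0.0.1, the peer interfaces enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420 and bridge forwarding, and a one-packet ping in each direction as a sanity check before nvmf_tgt is started inside the namespace. A condensed sketch of the same topology, using the interface names from the log (second target interface nvmf_tgt_if2 omitted; assumes root and that none of these names already exist), is:

# Sketch of the veth/bridge layout built by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator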
00:07:21.982 [2024-07-15 20:23:43.365478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.982 [2024-07-15 20:23:43.365782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.982 [2024-07-15 20:23:43.366034] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:21.982 [2024-07-15 20:23:43.366449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.982 [2024-07-15 20:23:43.366583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.982 [2024-07-15 20:23:43.366591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 [2024-07-15 20:23:44.202534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 Malloc0 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 Delay0 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 [2024-07-15 20:23:44.268095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.912 20:23:44 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:23.170 [2024-07-15 20:23:44.442739] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:25.069 Initializing NVMe Controllers 00:07:25.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:25.069 controller IO queue size 128 less than required 00:07:25.069 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:25.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:25.069 Initialization complete. Launching workers. 
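The workload whose results follow was stood up entirely over JSON-RPC in the trace above: a TCP transport (nvmf_create_transport -t tcp -o -u 8192 -a 256), a 64 MB Malloc0 bdev with 4096-byte blocks wrapped in a Delay0 delay bdev (presumably so I/O stays outstanding long enough to be aborted), subsystem nqn.2016-06.io.spdk:cnode0 exposing Delay0 as namespace 1, and listeners on 10.0.0.2:4420; build/examples/abort then connects at queue depth 128 for one second and aborts its own outstanding I/O. A condensed, hypothetical form of that setup, assuming the target started during nvmf_veth_init is serving RPC on the default socket (in the log the equivalent calls go through the rpc_cmd wrapper), is:

# Condensed sketch of the target/abort.sh setup steps traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
"$rpc" bdev_malloc_create 64 4096 -b Malloc0
"$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Drive it: queue depth 128, 1 second, core mask 0x1, warnings only.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS and CTRLR lines that follow report how many of those I/Os completed, how many aborts were submitted, and how many of the aborts succeeded.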
00:07:25.069 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 25232 00:07:25.069 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 25293, failed to submit 62 00:07:25.069 success 25236, unsuccess 57, failed 0 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:25.069 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:25.325 rmmod nvme_tcp 00:07:25.325 rmmod nvme_fabrics 00:07:25.325 rmmod nvme_keyring 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 68207 ']' 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 68207 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 68207 ']' 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 68207 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68207 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:25.325 killing process with pid 68207 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68207' 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 68207 00:07:25.325 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 68207 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:25.582 00:07:25.582 real 0m4.292s 00:07:25.582 user 0m12.406s 00:07:25.582 sys 0m0.997s 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.582 20:23:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:25.582 ************************************ 00:07:25.582 END TEST nvmf_abort 00:07:25.582 ************************************ 00:07:25.582 20:23:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:25.582 20:23:46 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:25.582 20:23:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.582 20:23:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.582 20:23:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:25.582 ************************************ 00:07:25.582 START TEST nvmf_ns_hotplug_stress 00:07:25.582 ************************************ 00:07:25.582 20:23:46 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:25.582 * Looking for test storage... 00:07:25.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.582 20:23:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.582 20:23:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 
-- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.582 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:25.840 Cannot find device "nvmf_tgt_br" 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.840 Cannot find device "nvmf_tgt_br2" 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:25.840 Cannot find device "nvmf_tgt_br" 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:25.840 Cannot find device "nvmf_tgt_br2" 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.840 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:25.840 20:23:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:25.840 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:26.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:26.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:07:26.100 00:07:26.100 --- 10.0.0.2 ping statistics --- 00:07:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.100 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:26.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:26.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:26.100 00:07:26.100 --- 10.0.0.3 ping statistics --- 00:07:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.100 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:26.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:26.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:07:26.100 00:07:26.100 --- 10.0.0.1 ping statistics --- 00:07:26.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:26.100 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:26.100 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=68473 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 68473 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 68473 ']' 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.101 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.101 [2024-07-15 20:23:47.472510] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:07:26.101 [2024-07-15 20:23:47.472592] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.359 [2024-07-15 20:23:47.608621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.359 [2024-07-15 20:23:47.697384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
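The nvmf_veth_init and nvmfappstart steps traced above reduce to a short veth/namespace/bridge recipe plus launching nvmf_tgt inside the namespace. A condensed sketch of that sequence follows (interface names, addresses, and paths are taken from the trace; the second target interface nvmf_tgt_if2/10.0.0.3 and the teardown/error handling are omitted, so this is a paraphrase of nvmf/common.sh rather than its verbatim body):

  # Build an isolated NVMe-oF/TCP test network: one veth pair into a namespace,
  # both host-side peers enslaved to a bridge, then run the target in the namespace.
  NETNS=nvmf_tgt_ns_spdk
  ip netns add "$NETNS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns "$NETNS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NETNS" ip link set nvmf_tgt_if up
  ip netns exec "$NETNS" ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # host -> target namespace
  ip netns exec "$NETNS" ping -c 1 10.0.0.1                     # target namespace -> host
  modprobe nvme-tcp
  ip netns exec "$NETNS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &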
00:07:26.359 [2024-07-15 20:23:47.697462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.359 [2024-07-15 20:23:47.697484] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.359 [2024-07-15 20:23:47.697499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.359 [2024-07-15 20:23:47.697512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:26.359 [2024-07-15 20:23:47.697670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.359 [2024-07-15 20:23:47.697854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.359 [2024-07-15 20:23:47.697886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:26.359 20:23:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:26.617 [2024-07-15 20:23:48.082575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.617 20:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:27.179 20:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:27.436 [2024-07-15 20:23:48.708635] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:27.436 20:23:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:27.691 20:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:27.947 Malloc0 00:07:27.947 20:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.203 Delay0 00:07:28.203 20:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.459 20:23:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:28.716 NULL1 00:07:28.716 
20:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:28.973 20:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:28.973 20:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=68596 00:07:28.973 20:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:28.973 20:23:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.347 Read completed with error (sct=0, sc=11) 00:07:30.347 20:23:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.605 20:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:30.605 20:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:30.874 true 00:07:30.874 20:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:30.874 20:23:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.833 20:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.833 20:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:31.833 20:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:32.090 true 00:07:32.346 20:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:32.346 20:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.346 20:23:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.912 20:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:32.912 20:23:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:32.912 true 00:07:32.912 20:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:32.912 20:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.171 20:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.428 20:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:33.428 20:23:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:33.686 true 00:07:33.686 20:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:33.686 20:23:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.621 20:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.879 20:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:34.879 20:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:35.138 true 00:07:35.138 20:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:35.138 20:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.397 20:23:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.655 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:35.655 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:35.914 true 00:07:35.914 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:35.914 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.173 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.431 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:36.431 20:23:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:36.689 true 00:07:36.689 20:23:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:36.689 20:23:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.623 20:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.881 20:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:37.881 20:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:38.140 true 00:07:38.140 20:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:38.140 20:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.398 20:23:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.657 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:38.657 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:38.916 true 00:07:38.916 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:38.916 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.236 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.509 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:39.509 20:24:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:39.767 true 00:07:39.767 20:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:39.767 20:24:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.703 20:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.961 20:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:40.961 20:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:41.219 true 00:07:41.219 20:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:41.219 20:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.477 20:24:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.736 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:41.736 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:41.993 true 00:07:41.993 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:41.993 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.249 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.506 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:42.506 20:24:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:42.764 true 00:07:42.764 20:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:42.764 20:24:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.699 20:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.957 20:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:43.957 20:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:44.215 true 00:07:44.215 20:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:44.215 20:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.472 20:24:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.731 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:44.731 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:44.989 true 00:07:44.989 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:44.989 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.247 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.505 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:45.505 20:24:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:46.072 true 00:07:46.072 20:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:46.072 20:24:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
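The @44-@50 iterations repeating above amount to a single hot-plug loop driven by the perf job started at @40/@42; the suppressed "Read completed with error" messages are that reader observing namespace 1 coming and going. A rough paraphrase of the loop (using rpc_py as set at @11 and the cnode1/NULL1/Delay0 objects created in the preamble; not the verbatim ns_hotplug_stress.sh body):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Background reader against namespace 1 of cnode1 while namespaces are hot-plugged.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID"; do                                    # keep cycling until perf exits
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # detach namespace 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # re-attach it
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"                  # resize NULL1 under load
  done
  wait "$PERF_PID"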
00:07:46.638 20:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.898 20:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:46.898 20:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:47.158 true 00:07:47.158 20:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:47.158 20:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.418 20:24:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.984 20:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:47.984 20:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:47.984 true 00:07:47.984 20:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:47.984 20:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.549 20:24:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.549 20:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:48.549 20:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:48.806 true 00:07:48.806 20:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:48.806 20:24:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.738 20:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.996 20:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:49.996 20:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:50.253 true 00:07:50.253 20:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:50.253 20:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.511 20:24:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.768 20:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:50.768 20:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:51.026 true 00:07:51.026 20:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:51.026 20:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.593 20:24:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.593 20:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:51.593 20:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:51.851 true 00:07:52.108 20:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:52.109 20:24:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.674 20:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.932 20:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:52.932 20:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:53.191 true 00:07:53.191 20:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:53.191 20:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.758 20:24:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.017 20:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:54.017 20:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:54.275 true 00:07:54.275 20:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:54.275 20:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.533 20:24:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.795 20:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:54.795 20:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:55.053 true 00:07:55.053 20:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:55.053 20:24:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.310 20:24:16 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.568 20:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:55.568 20:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:55.826 true 00:07:55.826 20:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:55.826 20:24:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.759 20:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:57.017 20:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:57.017 20:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:57.584 true 00:07:57.584 20:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:57.584 20:24:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.584 20:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.843 20:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:57.843 20:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:58.101 true 00:07:58.101 20:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:58.101 20:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.359 20:24:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.617 20:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:58.617 20:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:58.874 true 00:07:58.874 20:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:07:58.874 20:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.806 Initializing NVMe Controllers 00:07:59.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:59.806 Controller IO queue size 128, less than required. 00:07:59.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:59.806 Controller IO queue size 128, less than required. 00:07:59.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:59.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:59.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:59.806 Initialization complete. Launching workers. 00:07:59.806 ======================================================== 00:07:59.806 Latency(us) 00:07:59.806 Device Information : IOPS MiB/s Average min max 00:07:59.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 381.27 0.19 134396.69 3455.81 1087355.32 00:07:59.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7748.43 3.78 16519.60 4100.14 667331.07 00:07:59.806 ======================================================== 00:07:59.806 Total : 8129.70 3.97 22047.80 3455.81 1087355.32 00:07:59.806 00:07:59.806 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.064 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:08:00.064 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:08:00.322 true 00:08:00.322 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 68596 00:08:00.322 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (68596) - No such process 00:08:00.322 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 68596 00:08:00.322 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.580 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:00.837 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:00.837 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:00.837 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:00.837 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:00.837 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:01.095 null0 00:08:01.095 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.095 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.095 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:01.353 null1 00:08:01.353 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.353 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.353 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:01.611 null2 00:08:01.611 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.611 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.611 20:24:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:01.868 null3 00:08:01.868 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.868 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.868 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:02.126 null4 00:08:02.126 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.126 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.126 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:02.384 null5 00:08:02.384 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.384 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.384 20:24:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:02.641 null6 00:08:02.641 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.641 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.641 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:02.899 null7 00:08:03.157 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:03.157 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
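From @58 onward the trace is eight concurrent add_remove workers, one per null bdev, joined by the wait at @66. A sketch of that stage with the structure inferred from the xtrace at @14-@18 and @58-@66 (not copied verbatim from the script):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Each worker repeatedly attaches its null bdev as a fixed NSID and detaches it again.
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096       # null0 .. null7, arguments as traced
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &                 # NSIDs 1..8 hammered in parallel
      pids+=($!)
  done
  wait "${pids[@]}"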
00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 69659 69660 69662 69665 69667 69668 69670 69672 00:08:03.158 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.416 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.674 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.674 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.674 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.932 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.190 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.191 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.449 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.708 20:24:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.708 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.968 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.226 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:08:05.484 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.741 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.741 20:24:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.741 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.000 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.259 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.517 20:24:27 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.517 20:24:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.517 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.517 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.517 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.517 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.775 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.031 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.032 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.032 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.032 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.032 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.032 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.288 20:24:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:07.288 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.545 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.546 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:07.546 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:07.546 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.546 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
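The repeating churn traced above, and continuing below, is the core of the hotplug stress: the null bdevs null0 through null7 are attached to nqn.2016-06.io.spdk:cnode1 as namespaces 1 through 8 and then detached again, for up to ten passes. The following is a minimal bash sketch reconstructed from the xtrace output rather than quoted from target/ns_hotplug_stress.sh; the shuf-based ordering and the strictly sequential execution are assumptions, and the interleaving visible in the trace suggests the real script issues these RPCs concurrently.

#!/usr/bin/env bash
# Hedged reconstruction of the traced add/remove loop (not the verbatim SPDK script).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; ++i)); do      # matches the "(( ++i ))" / "(( i < 10 ))" trace lines
    for n in $(shuf -i 1-8); do     # shuf assumed here, to mimic the randomized order in the log
        "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in $(shuf -i 1-8); do     # detach the same namespace IDs, also in random order
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
    done
done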
00:08:07.546 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:07.803 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:08.060 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.318 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.576 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.576 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:08.576 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:08.576 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:08.576 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:08.576 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:08.576 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.576 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.833 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.833 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.833 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.833 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.833 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.833 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.834 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.834 rmmod nvme_tcp 00:08:09.092 rmmod nvme_fabrics 00:08:09.092 rmmod nvme_keyring 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 68473 ']' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 68473 00:08:09.092 20:24:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 68473 ']' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 68473 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68473 00:08:09.092 killing process with pid 68473 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68473' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 68473 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 68473 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:09.092 00:08:09.092 real 0m43.623s 00:08:09.092 user 3m34.087s 00:08:09.092 sys 0m12.698s 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.092 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.351 ************************************ 00:08:09.351 END TEST nvmf_ns_hotplug_stress 00:08:09.351 ************************************ 00:08:09.351 20:24:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:09.351 20:24:30 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:09.351 20:24:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.351 20:24:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.351 20:24:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:09.351 ************************************ 00:08:09.351 START TEST nvmf_connect_stress 00:08:09.351 ************************************ 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:09.351 * Looking for test storage... 
00:08:09.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.351 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:09.352 Cannot find device "nvmf_tgt_br" 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:09.352 Cannot find device "nvmf_tgt_br2" 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:09.352 Cannot find device "nvmf_tgt_br" 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:09.352 Cannot find device "nvmf_tgt_br2" 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:08:09.352 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:09.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:09.611 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:09.611 20:24:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:09.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:09.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:08:09.611 00:08:09.611 --- 10.0.0.2 ping statistics --- 00:08:09.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.611 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:09.611 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:09.611 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:08:09.611 00:08:09.611 --- 10.0.0.3 ping statistics --- 00:08:09.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.611 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:09.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:09.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:09.611 00:08:09.611 --- 10.0.0.1 ping statistics --- 00:08:09.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.611 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.611 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=70973 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 70973 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 70973 ']' 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
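The nvmf_veth_init sequence traced above builds the whole NVMe/TCP fabric in software before the target is started: an nvmf_tgt_ns_spdk network namespace holds the target-side veth endpoints at 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 on the host side, the bridge-side peers are enslaved to nvmf_br, and an iptables rule admits TCP port 4420. Condensed from the commands recorded in the log (a sketch, not the common.sh function itself):

# Namespace plus three veth pairs: initiator, first target address, second target address.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side endpoints move into the namespace; addresses as exercised by the ping tests above.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up, bridge the host-side peers, and open the NVMe/TCP port.
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host to target; should answer like the statistics above

The three pings in the log (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) simply confirm that the bridge forwards in both directions before the target is launched.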
00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.870 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.870 [2024-07-15 20:24:31.170229] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:09.870 [2024-07-15 20:24:31.170319] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.870 [2024-07-15 20:24:31.310778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.129 [2024-07-15 20:24:31.380817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.129 [2024-07-15 20:24:31.380887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.129 [2024-07-15 20:24:31.380901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.129 [2024-07-15 20:24:31.380910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.129 [2024-07-15 20:24:31.380920] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.129 [2024-07-15 20:24:31.381098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.129 [2024-07-15 20:24:31.381247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.129 [2024-07-15 20:24:31.381253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 [2024-07-15 20:24:31.514406] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 [2024-07-15 20:24:31.534521] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.129 NULL1 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=71012 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.129 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.696 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.696 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:10.696 20:24:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:10.696 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.696 20:24:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:10.955 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.955 20:24:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:10.955 20:24:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:10.955 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.955 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:11.213 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:11.213 20:24:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:11.213 20:24:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:11.213 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.213 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:11.472 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.472 20:24:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:11.472 20:24:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:11.472 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.472 20:24:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:12.038 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.038 20:24:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:12.038 20:24:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:12.038 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.038 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:12.295 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.295 20:24:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:12.295 20:24:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:12.295 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.295 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:12.553 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.553 20:24:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:12.553 20:24:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:12.553 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.553 20:24:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:12.811 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.811 20:24:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:12.811 20:24:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:12.811 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.811 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.068 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.068 20:24:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:13.068 20:24:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:13.069 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.069 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.634 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.634 20:24:34 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 71012 00:08:13.634 20:24:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:13.634 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.634 20:24:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:13.904 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.904 20:24:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:13.904 20:24:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:13.904 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.904 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:14.176 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.176 20:24:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:14.177 20:24:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:14.177 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.177 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:14.434 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.434 20:24:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:14.434 20:24:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:14.434 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.434 20:24:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:14.692 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:14.692 20:24:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:14.692 20:24:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:14.692 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:14.692 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.257 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.257 20:24:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:15.257 20:24:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:15.257 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.257 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.515 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.515 20:24:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:15.515 20:24:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:15.515 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.515 20:24:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:15.773 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.773 20:24:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:15.773 20:24:37 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:15.773 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.773 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.031 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.031 20:24:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:16.031 20:24:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.031 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.031 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.289 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.289 20:24:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:16.289 20:24:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.289 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.289 20:24:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.855 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.855 20:24:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:16.855 20:24:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:16.855 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.855 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.113 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.113 20:24:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:17.113 20:24:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.113 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.113 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.371 20:24:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:17.371 20:24:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.371 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.371 20:24:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.628 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.628 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:17.628 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:17.628 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.628 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:17.885 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.885 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:17.885 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:08:17.885 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.885 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.450 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.450 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:18.450 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.450 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.450 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.707 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.707 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:18.707 20:24:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.707 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.707 20:24:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:18.965 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.965 20:24:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:18.965 20:24:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:18.965 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.965 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.223 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.223 20:24:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:19.223 20:24:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.223 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.223 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:19.480 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.480 20:24:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:19.480 20:24:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:19.480 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.481 20:24:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.048 20:24:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.048 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:20.048 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.048 20:24:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.048 20:24:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.308 20:24:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.308 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:20.308 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:20.308 20:24:41 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.308 20:24:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.308 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 71012 00:08:20.567 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71012) - No such process 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 71012 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.567 rmmod nvme_tcp 00:08:20.567 rmmod nvme_fabrics 00:08:20.567 rmmod nvme_keyring 00:08:20.567 20:24:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 70973 ']' 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 70973 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 70973 ']' 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 70973 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 70973 00:08:20.567 killing process with pid 70973 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 70973' 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 70973 00:08:20.567 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 70973 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
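Stripped of the rpc_cmd/xtrace wrappers, the target-side configuration logged for this test reduces to four RPC calls. Issued directly with scripts/rpc.py they would look roughly like the following sketch (assuming the default /var/tmp/spdk.sock RPC socket; arguments copied from the log above):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, same options as logged above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                          # allow any host, serial number, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                              # listen on the namespaced veth address
$RPC bdev_null_create NULL1 1000 512                         # 1000 MB null bdev, 512-byte blocks

The fused_ordering run later in this log issues the same sequence and additionally attaches the bdev to the subsystem with nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1.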
00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:20.825 00:08:20.825 real 0m11.582s 00:08:20.825 user 0m38.759s 00:08:20.825 sys 0m3.192s 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:20.825 20:24:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:20.825 ************************************ 00:08:20.825 END TEST nvmf_connect_stress 00:08:20.825 ************************************ 00:08:20.825 20:24:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:20.825 20:24:42 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:20.825 20:24:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:20.825 20:24:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.825 20:24:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.825 ************************************ 00:08:20.825 START TEST nvmf_fused_ordering 00:08:20.825 ************************************ 00:08:20.825 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:21.084 * Looking for test storage... 
00:08:21.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.084 20:24:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:21.085 Cannot find device "nvmf_tgt_br" 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:21.085 Cannot find device "nvmf_tgt_br2" 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:21.085 Cannot find device "nvmf_tgt_br" 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:21.085 Cannot find device "nvmf_tgt_br2" 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:08:21.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:21.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:21.085 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:21.344 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:21.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:21.345 00:08:21.345 --- 10.0.0.2 ping statistics --- 00:08:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.345 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:21.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:21.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:21.345 00:08:21.345 --- 10.0.0.3 ping statistics --- 00:08:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.345 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:21.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:21.345 00:08:21.345 --- 10.0.0.1 ping statistics --- 00:08:21.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.345 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=71334 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 71334 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 71334 ']' 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
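The fused_ordering client started below is SPDK's own userspace initiator (it takes the trtype/traddr/trsvcid/subnqn string directly), but common.sh also prepares the kernel path: it loads nvme-tcp just above and generates the NVME_HOSTNQN/NVME_HOSTID values printed earlier. A rough nvme-cli sketch of reaching the same listener with those values (illustrative only, not part of this test):

modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode1 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
     --hostid=ec49175a-6012-419b-81e2-f6fecd100da5
nvme list                                       # once the null bdev below is added as a namespace, it shows up as a ~1 GB device
nvme disconnect -n nqn.2016-06.io.spdk:cnode1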
00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.345 20:24:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.345 [2024-07-15 20:24:42.798016] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:21.345 [2024-07-15 20:24:42.798115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.603 [2024-07-15 20:24:42.938126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.603 [2024-07-15 20:24:42.996862] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.603 [2024-07-15 20:24:42.996925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.603 [2024-07-15 20:24:42.996936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.603 [2024-07-15 20:24:42.996945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.603 [2024-07-15 20:24:42.996952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.603 [2024-07-15 20:24:42.996982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.603 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.603 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:08:21.603 20:24:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.603 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.603 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.860 [2024-07-15 20:24:43.123527] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.860 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 
00:08:21.861 [2024-07-15 20:24:43.139592] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.861 NULL1 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:21.861 20:24:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:21.861 [2024-07-15 20:24:43.199859] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:08:21.861 [2024-07-15 20:24:43.199964] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71370 ] 00:08:22.429 Attached to nqn.2016-06.io.spdk:cnode1 00:08:22.429 Namespace ID: 1 size: 1GB 00:08:22.429 fused_ordering(0) 00:08:22.429 fused_ordering(1) 00:08:22.429 fused_ordering(2) 00:08:22.429 fused_ordering(3) 00:08:22.429 fused_ordering(4) 00:08:22.429 fused_ordering(5) 00:08:22.429 fused_ordering(6) 00:08:22.429 fused_ordering(7) 00:08:22.429 fused_ordering(8) 00:08:22.429 fused_ordering(9) 00:08:22.429 fused_ordering(10) 00:08:22.429 fused_ordering(11) 00:08:22.429 fused_ordering(12) 00:08:22.429 fused_ordering(13) 00:08:22.429 fused_ordering(14) 00:08:22.429 fused_ordering(15) 00:08:22.429 fused_ordering(16) 00:08:22.429 fused_ordering(17) 00:08:22.429 fused_ordering(18) 00:08:22.429 fused_ordering(19) 00:08:22.429 fused_ordering(20) 00:08:22.429 fused_ordering(21) 00:08:22.429 fused_ordering(22) 00:08:22.429 fused_ordering(23) 00:08:22.429 fused_ordering(24) 00:08:22.429 fused_ordering(25) 00:08:22.429 fused_ordering(26) 00:08:22.429 fused_ordering(27) 00:08:22.429 fused_ordering(28) 00:08:22.429 fused_ordering(29) 00:08:22.429 fused_ordering(30) 00:08:22.429 fused_ordering(31) 00:08:22.429 fused_ordering(32) 00:08:22.429 fused_ordering(33) 00:08:22.429 fused_ordering(34) 00:08:22.429 fused_ordering(35) 00:08:22.429 fused_ordering(36) 00:08:22.429 fused_ordering(37) 00:08:22.429 fused_ordering(38) 00:08:22.429 fused_ordering(39) 00:08:22.429 fused_ordering(40) 00:08:22.429 fused_ordering(41) 00:08:22.429 fused_ordering(42) 00:08:22.429 fused_ordering(43) 00:08:22.430 fused_ordering(44) 00:08:22.430 fused_ordering(45) 00:08:22.430 fused_ordering(46) 00:08:22.430 fused_ordering(47) 00:08:22.430 fused_ordering(48) 00:08:22.430 fused_ordering(49) 00:08:22.430 fused_ordering(50) 00:08:22.430 fused_ordering(51) 00:08:22.430 fused_ordering(52) 00:08:22.430 fused_ordering(53) 00:08:22.430 fused_ordering(54) 00:08:22.430 fused_ordering(55) 00:08:22.430 fused_ordering(56) 00:08:22.430 fused_ordering(57) 00:08:22.430 fused_ordering(58) 00:08:22.430 fused_ordering(59) 00:08:22.430 fused_ordering(60) 00:08:22.430 fused_ordering(61) 00:08:22.430 fused_ordering(62) 00:08:22.430 fused_ordering(63) 00:08:22.430 fused_ordering(64) 00:08:22.430 fused_ordering(65) 00:08:22.430 fused_ordering(66) 00:08:22.430 fused_ordering(67) 00:08:22.430 fused_ordering(68) 00:08:22.430 fused_ordering(69) 00:08:22.430 fused_ordering(70) 00:08:22.430 fused_ordering(71) 00:08:22.430 fused_ordering(72) 00:08:22.430 fused_ordering(73) 00:08:22.430 fused_ordering(74) 00:08:22.430 fused_ordering(75) 00:08:22.430 fused_ordering(76) 00:08:22.430 fused_ordering(77) 00:08:22.430 fused_ordering(78) 00:08:22.430 fused_ordering(79) 00:08:22.430 fused_ordering(80) 00:08:22.430 fused_ordering(81) 00:08:22.430 fused_ordering(82) 00:08:22.430 fused_ordering(83) 00:08:22.430 fused_ordering(84) 00:08:22.430 fused_ordering(85) 00:08:22.430 fused_ordering(86) 00:08:22.430 fused_ordering(87) 00:08:22.430 fused_ordering(88) 00:08:22.430 fused_ordering(89) 00:08:22.430 fused_ordering(90) 00:08:22.430 fused_ordering(91) 00:08:22.430 fused_ordering(92) 00:08:22.430 fused_ordering(93) 00:08:22.430 fused_ordering(94) 00:08:22.430 fused_ordering(95) 00:08:22.430 fused_ordering(96) 00:08:22.430 fused_ordering(97) 00:08:22.430 
fused_ordering(98) 00:08:22.430 fused_ordering(99) 00:08:22.430 fused_ordering(100) 00:08:22.430 fused_ordering(101) 00:08:22.430 fused_ordering(102) 00:08:22.430 fused_ordering(103) 00:08:22.430 fused_ordering(104) 00:08:22.430 fused_ordering(105) 00:08:22.430 fused_ordering(106) 00:08:22.430 fused_ordering(107) 00:08:22.430 fused_ordering(108) 00:08:22.430 fused_ordering(109) 00:08:22.430 fused_ordering(110) 00:08:22.430 fused_ordering(111) 00:08:22.430 fused_ordering(112) 00:08:22.430 fused_ordering(113) 00:08:22.430 fused_ordering(114) 00:08:22.430 fused_ordering(115) 00:08:22.430 fused_ordering(116) 00:08:22.430 fused_ordering(117) 00:08:22.430 fused_ordering(118) 00:08:22.430 fused_ordering(119) 00:08:22.430 fused_ordering(120) 00:08:22.430 fused_ordering(121) 00:08:22.430 fused_ordering(122) 00:08:22.430 fused_ordering(123) 00:08:22.430 fused_ordering(124) 00:08:22.430 fused_ordering(125) 00:08:22.430 fused_ordering(126) 00:08:22.430 fused_ordering(127) 00:08:22.430 fused_ordering(128) 00:08:22.430 fused_ordering(129) 00:08:22.430 fused_ordering(130) 00:08:22.430 fused_ordering(131) 00:08:22.430 fused_ordering(132) 00:08:22.430 fused_ordering(133) 00:08:22.430 fused_ordering(134) 00:08:22.430 fused_ordering(135) 00:08:22.430 fused_ordering(136) 00:08:22.430 fused_ordering(137) 00:08:22.430 fused_ordering(138) 00:08:22.430 fused_ordering(139) 00:08:22.430 fused_ordering(140) 00:08:22.430 fused_ordering(141) 00:08:22.430 fused_ordering(142) 00:08:22.430 fused_ordering(143) 00:08:22.430 fused_ordering(144) 00:08:22.430 fused_ordering(145) 00:08:22.430 fused_ordering(146) 00:08:22.430 fused_ordering(147) 00:08:22.430 fused_ordering(148) 00:08:22.430 fused_ordering(149) 00:08:22.430 fused_ordering(150) 00:08:22.430 fused_ordering(151) 00:08:22.430 fused_ordering(152) 00:08:22.430 fused_ordering(153) 00:08:22.430 fused_ordering(154) 00:08:22.430 fused_ordering(155) 00:08:22.430 fused_ordering(156) 00:08:22.430 fused_ordering(157) 00:08:22.430 fused_ordering(158) 00:08:22.430 fused_ordering(159) 00:08:22.430 fused_ordering(160) 00:08:22.430 fused_ordering(161) 00:08:22.430 fused_ordering(162) 00:08:22.430 fused_ordering(163) 00:08:22.430 fused_ordering(164) 00:08:22.430 fused_ordering(165) 00:08:22.430 fused_ordering(166) 00:08:22.430 fused_ordering(167) 00:08:22.430 fused_ordering(168) 00:08:22.430 fused_ordering(169) 00:08:22.430 fused_ordering(170) 00:08:22.430 fused_ordering(171) 00:08:22.430 fused_ordering(172) 00:08:22.430 fused_ordering(173) 00:08:22.430 fused_ordering(174) 00:08:22.430 fused_ordering(175) 00:08:22.430 fused_ordering(176) 00:08:22.430 fused_ordering(177) 00:08:22.430 fused_ordering(178) 00:08:22.430 fused_ordering(179) 00:08:22.430 fused_ordering(180) 00:08:22.430 fused_ordering(181) 00:08:22.430 fused_ordering(182) 00:08:22.430 fused_ordering(183) 00:08:22.430 fused_ordering(184) 00:08:22.430 fused_ordering(185) 00:08:22.430 fused_ordering(186) 00:08:22.430 fused_ordering(187) 00:08:22.430 fused_ordering(188) 00:08:22.430 fused_ordering(189) 00:08:22.430 fused_ordering(190) 00:08:22.430 fused_ordering(191) 00:08:22.430 fused_ordering(192) 00:08:22.430 fused_ordering(193) 00:08:22.430 fused_ordering(194) 00:08:22.430 fused_ordering(195) 00:08:22.430 fused_ordering(196) 00:08:22.430 fused_ordering(197) 00:08:22.430 fused_ordering(198) 00:08:22.430 fused_ordering(199) 00:08:22.430 fused_ordering(200) 00:08:22.430 fused_ordering(201) 00:08:22.430 fused_ordering(202) 00:08:22.430 fused_ordering(203) 00:08:22.430 fused_ordering(204) 00:08:22.430 fused_ordering(205) 
00:08:22.688 fused_ordering(206) 00:08:22.688 fused_ordering(207) 00:08:22.688 fused_ordering(208) 00:08:22.688 fused_ordering(209) 00:08:22.688 fused_ordering(210) 00:08:22.688 fused_ordering(211) 00:08:22.688 fused_ordering(212) 00:08:22.688 fused_ordering(213) 00:08:22.688 fused_ordering(214) 00:08:22.688 fused_ordering(215) 00:08:22.688 fused_ordering(216) 00:08:22.688 fused_ordering(217) 00:08:22.688 fused_ordering(218) 00:08:22.688 fused_ordering(219) 00:08:22.688 fused_ordering(220) 00:08:22.688 fused_ordering(221) 00:08:22.688 fused_ordering(222) 00:08:22.688 fused_ordering(223) 00:08:22.688 fused_ordering(224) 00:08:22.688 fused_ordering(225) 00:08:22.688 fused_ordering(226) 00:08:22.688 fused_ordering(227) 00:08:22.688 fused_ordering(228) 00:08:22.688 fused_ordering(229) 00:08:22.688 fused_ordering(230) 00:08:22.688 fused_ordering(231) 00:08:22.688 fused_ordering(232) 00:08:22.688 fused_ordering(233) 00:08:22.688 fused_ordering(234) 00:08:22.688 fused_ordering(235) 00:08:22.688 fused_ordering(236) 00:08:22.688 fused_ordering(237) 00:08:22.688 fused_ordering(238) 00:08:22.688 fused_ordering(239) 00:08:22.688 fused_ordering(240) 00:08:22.688 fused_ordering(241) 00:08:22.688 fused_ordering(242) 00:08:22.688 fused_ordering(243) 00:08:22.688 fused_ordering(244) 00:08:22.688 fused_ordering(245) 00:08:22.688 fused_ordering(246) 00:08:22.688 fused_ordering(247) 00:08:22.688 fused_ordering(248) 00:08:22.688 fused_ordering(249) 00:08:22.688 fused_ordering(250) 00:08:22.688 fused_ordering(251) 00:08:22.688 fused_ordering(252) 00:08:22.688 fused_ordering(253) 00:08:22.688 fused_ordering(254) 00:08:22.688 fused_ordering(255) 00:08:22.688 fused_ordering(256) 00:08:22.688 fused_ordering(257) 00:08:22.688 fused_ordering(258) 00:08:22.688 fused_ordering(259) 00:08:22.688 fused_ordering(260) 00:08:22.688 fused_ordering(261) 00:08:22.688 fused_ordering(262) 00:08:22.688 fused_ordering(263) 00:08:22.688 fused_ordering(264) 00:08:22.688 fused_ordering(265) 00:08:22.688 fused_ordering(266) 00:08:22.688 fused_ordering(267) 00:08:22.688 fused_ordering(268) 00:08:22.688 fused_ordering(269) 00:08:22.688 fused_ordering(270) 00:08:22.688 fused_ordering(271) 00:08:22.688 fused_ordering(272) 00:08:22.688 fused_ordering(273) 00:08:22.688 fused_ordering(274) 00:08:22.688 fused_ordering(275) 00:08:22.688 fused_ordering(276) 00:08:22.688 fused_ordering(277) 00:08:22.688 fused_ordering(278) 00:08:22.688 fused_ordering(279) 00:08:22.688 fused_ordering(280) 00:08:22.688 fused_ordering(281) 00:08:22.688 fused_ordering(282) 00:08:22.688 fused_ordering(283) 00:08:22.688 fused_ordering(284) 00:08:22.688 fused_ordering(285) 00:08:22.688 fused_ordering(286) 00:08:22.688 fused_ordering(287) 00:08:22.688 fused_ordering(288) 00:08:22.688 fused_ordering(289) 00:08:22.688 fused_ordering(290) 00:08:22.688 fused_ordering(291) 00:08:22.688 fused_ordering(292) 00:08:22.688 fused_ordering(293) 00:08:22.688 fused_ordering(294) 00:08:22.688 fused_ordering(295) 00:08:22.688 fused_ordering(296) 00:08:22.688 fused_ordering(297) 00:08:22.688 fused_ordering(298) 00:08:22.688 fused_ordering(299) 00:08:22.688 fused_ordering(300) 00:08:22.688 fused_ordering(301) 00:08:22.688 fused_ordering(302) 00:08:22.688 fused_ordering(303) 00:08:22.688 fused_ordering(304) 00:08:22.688 fused_ordering(305) 00:08:22.688 fused_ordering(306) 00:08:22.688 fused_ordering(307) 00:08:22.688 fused_ordering(308) 00:08:22.688 fused_ordering(309) 00:08:22.688 fused_ordering(310) 00:08:22.688 fused_ordering(311) 00:08:22.688 fused_ordering(312) 00:08:22.688 
fused_ordering(313) 00:08:22.688 fused_ordering(314) 00:08:22.688 fused_ordering(315) 00:08:22.688 fused_ordering(316) 00:08:22.688 fused_ordering(317) 00:08:22.688 fused_ordering(318) 00:08:22.688 fused_ordering(319) 00:08:22.688 fused_ordering(320) 00:08:22.688 fused_ordering(321) 00:08:22.688 fused_ordering(322) 00:08:22.688 fused_ordering(323) 00:08:22.688 fused_ordering(324) 00:08:22.688 fused_ordering(325) 00:08:22.688 fused_ordering(326) 00:08:22.688 fused_ordering(327) 00:08:22.689 fused_ordering(328) 00:08:22.689 fused_ordering(329) 00:08:22.689 fused_ordering(330) 00:08:22.689 fused_ordering(331) 00:08:22.689 fused_ordering(332) 00:08:22.689 fused_ordering(333) 00:08:22.689 fused_ordering(334) 00:08:22.689 fused_ordering(335) 00:08:22.689 fused_ordering(336) 00:08:22.689 fused_ordering(337) 00:08:22.689 fused_ordering(338) 00:08:22.689 fused_ordering(339) 00:08:22.689 fused_ordering(340) 00:08:22.689 fused_ordering(341) 00:08:22.689 fused_ordering(342) 00:08:22.689 fused_ordering(343) 00:08:22.689 fused_ordering(344) 00:08:22.689 fused_ordering(345) 00:08:22.689 fused_ordering(346) 00:08:22.689 fused_ordering(347) 00:08:22.689 fused_ordering(348) 00:08:22.689 fused_ordering(349) 00:08:22.689 fused_ordering(350) 00:08:22.689 fused_ordering(351) 00:08:22.689 fused_ordering(352) 00:08:22.689 fused_ordering(353) 00:08:22.689 fused_ordering(354) 00:08:22.689 fused_ordering(355) 00:08:22.689 fused_ordering(356) 00:08:22.689 fused_ordering(357) 00:08:22.689 fused_ordering(358) 00:08:22.689 fused_ordering(359) 00:08:22.689 fused_ordering(360) 00:08:22.689 fused_ordering(361) 00:08:22.689 fused_ordering(362) 00:08:22.689 fused_ordering(363) 00:08:22.689 fused_ordering(364) 00:08:22.689 fused_ordering(365) 00:08:22.689 fused_ordering(366) 00:08:22.689 fused_ordering(367) 00:08:22.689 fused_ordering(368) 00:08:22.689 fused_ordering(369) 00:08:22.689 fused_ordering(370) 00:08:22.689 fused_ordering(371) 00:08:22.689 fused_ordering(372) 00:08:22.689 fused_ordering(373) 00:08:22.689 fused_ordering(374) 00:08:22.689 fused_ordering(375) 00:08:22.689 fused_ordering(376) 00:08:22.689 fused_ordering(377) 00:08:22.689 fused_ordering(378) 00:08:22.689 fused_ordering(379) 00:08:22.689 fused_ordering(380) 00:08:22.689 fused_ordering(381) 00:08:22.689 fused_ordering(382) 00:08:22.689 fused_ordering(383) 00:08:22.689 fused_ordering(384) 00:08:22.689 fused_ordering(385) 00:08:22.689 fused_ordering(386) 00:08:22.689 fused_ordering(387) 00:08:22.689 fused_ordering(388) 00:08:22.689 fused_ordering(389) 00:08:22.689 fused_ordering(390) 00:08:22.689 fused_ordering(391) 00:08:22.689 fused_ordering(392) 00:08:22.689 fused_ordering(393) 00:08:22.689 fused_ordering(394) 00:08:22.689 fused_ordering(395) 00:08:22.689 fused_ordering(396) 00:08:22.689 fused_ordering(397) 00:08:22.689 fused_ordering(398) 00:08:22.689 fused_ordering(399) 00:08:22.689 fused_ordering(400) 00:08:22.689 fused_ordering(401) 00:08:22.689 fused_ordering(402) 00:08:22.689 fused_ordering(403) 00:08:22.689 fused_ordering(404) 00:08:22.689 fused_ordering(405) 00:08:22.689 fused_ordering(406) 00:08:22.689 fused_ordering(407) 00:08:22.689 fused_ordering(408) 00:08:22.689 fused_ordering(409) 00:08:22.689 fused_ordering(410) 00:08:22.946 fused_ordering(411) 00:08:22.946 fused_ordering(412) 00:08:22.946 fused_ordering(413) 00:08:22.946 fused_ordering(414) 00:08:22.946 fused_ordering(415) 00:08:22.946 fused_ordering(416) 00:08:22.946 fused_ordering(417) 00:08:22.946 fused_ordering(418) 00:08:22.946 fused_ordering(419) 00:08:22.946 fused_ordering(420) 
00:08:22.946 fused_ordering(421) 00:08:22.946 fused_ordering(422) 00:08:22.946 fused_ordering(423) 00:08:22.946 fused_ordering(424) 00:08:22.946 fused_ordering(425) 00:08:22.946 fused_ordering(426) 00:08:22.946 fused_ordering(427) 00:08:22.946 fused_ordering(428) 00:08:22.946 fused_ordering(429) 00:08:22.946 fused_ordering(430) 00:08:22.946 fused_ordering(431) 00:08:22.946 fused_ordering(432) 00:08:22.946 fused_ordering(433) 00:08:22.946 fused_ordering(434) 00:08:22.946 fused_ordering(435) 00:08:22.946 fused_ordering(436) 00:08:22.946 fused_ordering(437) 00:08:22.946 fused_ordering(438) 00:08:22.946 fused_ordering(439) 00:08:22.947 fused_ordering(440) 00:08:22.947 fused_ordering(441) 00:08:22.947 fused_ordering(442) 00:08:22.947 fused_ordering(443) 00:08:22.947 fused_ordering(444) 00:08:22.947 fused_ordering(445) 00:08:22.947 fused_ordering(446) 00:08:22.947 fused_ordering(447) 00:08:22.947 fused_ordering(448) 00:08:22.947 fused_ordering(449) 00:08:22.947 fused_ordering(450) 00:08:22.947 fused_ordering(451) 00:08:22.947 fused_ordering(452) 00:08:22.947 fused_ordering(453) 00:08:22.947 fused_ordering(454) 00:08:22.947 fused_ordering(455) 00:08:22.947 fused_ordering(456) 00:08:22.947 fused_ordering(457) 00:08:22.947 fused_ordering(458) 00:08:22.947 fused_ordering(459) 00:08:22.947 fused_ordering(460) 00:08:22.947 fused_ordering(461) 00:08:22.947 fused_ordering(462) 00:08:22.947 fused_ordering(463) 00:08:22.947 fused_ordering(464) 00:08:22.947 fused_ordering(465) 00:08:22.947 fused_ordering(466) 00:08:22.947 fused_ordering(467) 00:08:22.947 fused_ordering(468) 00:08:22.947 fused_ordering(469) 00:08:22.947 fused_ordering(470) 00:08:22.947 fused_ordering(471) 00:08:22.947 fused_ordering(472) 00:08:22.947 fused_ordering(473) 00:08:22.947 fused_ordering(474) 00:08:22.947 fused_ordering(475) 00:08:22.947 fused_ordering(476) 00:08:22.947 fused_ordering(477) 00:08:22.947 fused_ordering(478) 00:08:22.947 fused_ordering(479) 00:08:22.947 fused_ordering(480) 00:08:22.947 fused_ordering(481) 00:08:22.947 fused_ordering(482) 00:08:22.947 fused_ordering(483) 00:08:22.947 fused_ordering(484) 00:08:22.947 fused_ordering(485) 00:08:22.947 fused_ordering(486) 00:08:22.947 fused_ordering(487) 00:08:22.947 fused_ordering(488) 00:08:22.947 fused_ordering(489) 00:08:22.947 fused_ordering(490) 00:08:22.947 fused_ordering(491) 00:08:22.947 fused_ordering(492) 00:08:22.947 fused_ordering(493) 00:08:22.947 fused_ordering(494) 00:08:22.947 fused_ordering(495) 00:08:22.947 fused_ordering(496) 00:08:22.947 fused_ordering(497) 00:08:22.947 fused_ordering(498) 00:08:22.947 fused_ordering(499) 00:08:22.947 fused_ordering(500) 00:08:22.947 fused_ordering(501) 00:08:22.947 fused_ordering(502) 00:08:22.947 fused_ordering(503) 00:08:22.947 fused_ordering(504) 00:08:22.947 fused_ordering(505) 00:08:22.947 fused_ordering(506) 00:08:22.947 fused_ordering(507) 00:08:22.947 fused_ordering(508) 00:08:22.947 fused_ordering(509) 00:08:22.947 fused_ordering(510) 00:08:22.947 fused_ordering(511) 00:08:22.947 fused_ordering(512) 00:08:22.947 fused_ordering(513) 00:08:22.947 fused_ordering(514) 00:08:22.947 fused_ordering(515) 00:08:22.947 fused_ordering(516) 00:08:22.947 fused_ordering(517) 00:08:22.947 fused_ordering(518) 00:08:22.947 fused_ordering(519) 00:08:22.947 fused_ordering(520) 00:08:22.947 fused_ordering(521) 00:08:22.947 fused_ordering(522) 00:08:22.947 fused_ordering(523) 00:08:22.947 fused_ordering(524) 00:08:22.947 fused_ordering(525) 00:08:22.947 fused_ordering(526) 00:08:22.947 fused_ordering(527) 00:08:22.947 
fused_ordering(528) 00:08:22.947 fused_ordering(529) 00:08:22.947 fused_ordering(530) 00:08:22.947 fused_ordering(531) 00:08:22.947 fused_ordering(532) 00:08:22.947 fused_ordering(533) 00:08:22.947 fused_ordering(534) 00:08:22.947 fused_ordering(535) 00:08:22.947 fused_ordering(536) 00:08:22.947 fused_ordering(537) 00:08:22.947 fused_ordering(538) 00:08:22.947 fused_ordering(539) 00:08:22.947 fused_ordering(540) 00:08:22.947 fused_ordering(541) 00:08:22.947 fused_ordering(542) 00:08:22.947 fused_ordering(543) 00:08:22.947 fused_ordering(544) 00:08:22.947 fused_ordering(545) 00:08:22.947 fused_ordering(546) 00:08:22.947 fused_ordering(547) 00:08:22.947 fused_ordering(548) 00:08:22.947 fused_ordering(549) 00:08:22.947 fused_ordering(550) 00:08:22.947 fused_ordering(551) 00:08:22.947 fused_ordering(552) 00:08:22.947 fused_ordering(553) 00:08:22.947 fused_ordering(554) 00:08:22.947 fused_ordering(555) 00:08:22.947 fused_ordering(556) 00:08:22.947 fused_ordering(557) 00:08:22.947 fused_ordering(558) 00:08:22.947 fused_ordering(559) 00:08:22.947 fused_ordering(560) 00:08:22.947 fused_ordering(561) 00:08:22.947 fused_ordering(562) 00:08:22.947 fused_ordering(563) 00:08:22.947 fused_ordering(564) 00:08:22.947 fused_ordering(565) 00:08:22.947 fused_ordering(566) 00:08:22.947 fused_ordering(567) 00:08:22.947 fused_ordering(568) 00:08:22.947 fused_ordering(569) 00:08:22.947 fused_ordering(570) 00:08:22.947 fused_ordering(571) 00:08:22.947 fused_ordering(572) 00:08:22.947 fused_ordering(573) 00:08:22.947 fused_ordering(574) 00:08:22.947 fused_ordering(575) 00:08:22.947 fused_ordering(576) 00:08:22.947 fused_ordering(577) 00:08:22.947 fused_ordering(578) 00:08:22.947 fused_ordering(579) 00:08:22.947 fused_ordering(580) 00:08:22.947 fused_ordering(581) 00:08:22.947 fused_ordering(582) 00:08:22.947 fused_ordering(583) 00:08:22.947 fused_ordering(584) 00:08:22.947 fused_ordering(585) 00:08:22.947 fused_ordering(586) 00:08:22.947 fused_ordering(587) 00:08:22.947 fused_ordering(588) 00:08:22.947 fused_ordering(589) 00:08:22.947 fused_ordering(590) 00:08:22.947 fused_ordering(591) 00:08:22.947 fused_ordering(592) 00:08:22.947 fused_ordering(593) 00:08:22.947 fused_ordering(594) 00:08:22.947 fused_ordering(595) 00:08:22.947 fused_ordering(596) 00:08:22.947 fused_ordering(597) 00:08:22.947 fused_ordering(598) 00:08:22.947 fused_ordering(599) 00:08:22.947 fused_ordering(600) 00:08:22.947 fused_ordering(601) 00:08:22.947 fused_ordering(602) 00:08:22.947 fused_ordering(603) 00:08:22.947 fused_ordering(604) 00:08:22.947 fused_ordering(605) 00:08:22.947 fused_ordering(606) 00:08:22.947 fused_ordering(607) 00:08:22.947 fused_ordering(608) 00:08:22.947 fused_ordering(609) 00:08:22.947 fused_ordering(610) 00:08:22.947 fused_ordering(611) 00:08:22.947 fused_ordering(612) 00:08:22.947 fused_ordering(613) 00:08:22.947 fused_ordering(614) 00:08:22.947 fused_ordering(615) 00:08:23.512 fused_ordering(616) 00:08:23.512 fused_ordering(617) 00:08:23.512 fused_ordering(618) 00:08:23.512 fused_ordering(619) 00:08:23.512 fused_ordering(620) 00:08:23.512 fused_ordering(621) 00:08:23.512 fused_ordering(622) 00:08:23.512 fused_ordering(623) 00:08:23.512 fused_ordering(624) 00:08:23.512 fused_ordering(625) 00:08:23.512 fused_ordering(626) 00:08:23.512 fused_ordering(627) 00:08:23.512 fused_ordering(628) 00:08:23.512 fused_ordering(629) 00:08:23.512 fused_ordering(630) 00:08:23.512 fused_ordering(631) 00:08:23.512 fused_ordering(632) 00:08:23.512 fused_ordering(633) 00:08:23.512 fused_ordering(634) 00:08:23.512 fused_ordering(635) 
00:08:23.512 fused_ordering(636) 00:08:23.512 fused_ordering(637) 00:08:23.512 fused_ordering(638) 00:08:23.512 fused_ordering(639) 00:08:23.512 fused_ordering(640) 00:08:23.512 fused_ordering(641) 00:08:23.512 fused_ordering(642) 00:08:23.512 fused_ordering(643) 00:08:23.512 fused_ordering(644) 00:08:23.512 fused_ordering(645) 00:08:23.512 fused_ordering(646) 00:08:23.512 fused_ordering(647) 00:08:23.512 fused_ordering(648) 00:08:23.512 fused_ordering(649) 00:08:23.512 fused_ordering(650) 00:08:23.512 fused_ordering(651) 00:08:23.512 fused_ordering(652) 00:08:23.512 fused_ordering(653) 00:08:23.512 fused_ordering(654) 00:08:23.512 fused_ordering(655) 00:08:23.512 fused_ordering(656) 00:08:23.512 fused_ordering(657) 00:08:23.512 fused_ordering(658) 00:08:23.512 fused_ordering(659) 00:08:23.512 fused_ordering(660) 00:08:23.512 fused_ordering(661) 00:08:23.512 fused_ordering(662) 00:08:23.512 fused_ordering(663) 00:08:23.512 fused_ordering(664) 00:08:23.512 fused_ordering(665) 00:08:23.512 fused_ordering(666) 00:08:23.512 fused_ordering(667) 00:08:23.512 fused_ordering(668) 00:08:23.512 fused_ordering(669) 00:08:23.512 fused_ordering(670) 00:08:23.512 fused_ordering(671) 00:08:23.512 fused_ordering(672) 00:08:23.512 fused_ordering(673) 00:08:23.512 fused_ordering(674) 00:08:23.512 fused_ordering(675) 00:08:23.512 fused_ordering(676) 00:08:23.512 fused_ordering(677) 00:08:23.512 fused_ordering(678) 00:08:23.512 fused_ordering(679) 00:08:23.512 fused_ordering(680) 00:08:23.512 fused_ordering(681) 00:08:23.512 fused_ordering(682) 00:08:23.512 fused_ordering(683) 00:08:23.512 fused_ordering(684) 00:08:23.512 fused_ordering(685) 00:08:23.512 fused_ordering(686) 00:08:23.512 fused_ordering(687) 00:08:23.512 fused_ordering(688) 00:08:23.512 fused_ordering(689) 00:08:23.512 fused_ordering(690) 00:08:23.512 fused_ordering(691) 00:08:23.512 fused_ordering(692) 00:08:23.512 fused_ordering(693) 00:08:23.512 fused_ordering(694) 00:08:23.512 fused_ordering(695) 00:08:23.512 fused_ordering(696) 00:08:23.512 fused_ordering(697) 00:08:23.512 fused_ordering(698) 00:08:23.512 fused_ordering(699) 00:08:23.512 fused_ordering(700) 00:08:23.512 fused_ordering(701) 00:08:23.512 fused_ordering(702) 00:08:23.512 fused_ordering(703) 00:08:23.512 fused_ordering(704) 00:08:23.512 fused_ordering(705) 00:08:23.512 fused_ordering(706) 00:08:23.512 fused_ordering(707) 00:08:23.512 fused_ordering(708) 00:08:23.512 fused_ordering(709) 00:08:23.512 fused_ordering(710) 00:08:23.512 fused_ordering(711) 00:08:23.512 fused_ordering(712) 00:08:23.513 fused_ordering(713) 00:08:23.513 fused_ordering(714) 00:08:23.513 fused_ordering(715) 00:08:23.513 fused_ordering(716) 00:08:23.513 fused_ordering(717) 00:08:23.513 fused_ordering(718) 00:08:23.513 fused_ordering(719) 00:08:23.513 fused_ordering(720) 00:08:23.513 fused_ordering(721) 00:08:23.513 fused_ordering(722) 00:08:23.513 fused_ordering(723) 00:08:23.513 fused_ordering(724) 00:08:23.513 fused_ordering(725) 00:08:23.513 fused_ordering(726) 00:08:23.513 fused_ordering(727) 00:08:23.513 fused_ordering(728) 00:08:23.513 fused_ordering(729) 00:08:23.513 fused_ordering(730) 00:08:23.513 fused_ordering(731) 00:08:23.513 fused_ordering(732) 00:08:23.513 fused_ordering(733) 00:08:23.513 fused_ordering(734) 00:08:23.513 fused_ordering(735) 00:08:23.513 fused_ordering(736) 00:08:23.513 fused_ordering(737) 00:08:23.513 fused_ordering(738) 00:08:23.513 fused_ordering(739) 00:08:23.513 fused_ordering(740) 00:08:23.513 fused_ordering(741) 00:08:23.513 fused_ordering(742) 00:08:23.513 
fused_ordering(743) 00:08:23.513 fused_ordering(744) 00:08:23.513 fused_ordering(745) 00:08:23.513 fused_ordering(746) 00:08:23.513 fused_ordering(747) 00:08:23.513 fused_ordering(748) 00:08:23.513 fused_ordering(749) 00:08:23.513 fused_ordering(750) 00:08:23.513 fused_ordering(751) 00:08:23.513 fused_ordering(752) 00:08:23.513 fused_ordering(753) 00:08:23.513 fused_ordering(754) 00:08:23.513 fused_ordering(755) 00:08:23.513 fused_ordering(756) 00:08:23.513 fused_ordering(757) 00:08:23.513 fused_ordering(758) 00:08:23.513 fused_ordering(759) 00:08:23.513 fused_ordering(760) 00:08:23.513 fused_ordering(761) 00:08:23.513 fused_ordering(762) 00:08:23.513 fused_ordering(763) 00:08:23.513 fused_ordering(764) 00:08:23.513 fused_ordering(765) 00:08:23.513 fused_ordering(766) 00:08:23.513 fused_ordering(767) 00:08:23.513 fused_ordering(768) 00:08:23.513 fused_ordering(769) 00:08:23.513 fused_ordering(770) 00:08:23.513 fused_ordering(771) 00:08:23.513 fused_ordering(772) 00:08:23.513 fused_ordering(773) 00:08:23.513 fused_ordering(774) 00:08:23.513 fused_ordering(775) 00:08:23.513 fused_ordering(776) 00:08:23.513 fused_ordering(777) 00:08:23.513 fused_ordering(778) 00:08:23.513 fused_ordering(779) 00:08:23.513 fused_ordering(780) 00:08:23.513 fused_ordering(781) 00:08:23.513 fused_ordering(782) 00:08:23.513 fused_ordering(783) 00:08:23.513 fused_ordering(784) 00:08:23.513 fused_ordering(785) 00:08:23.513 fused_ordering(786) 00:08:23.513 fused_ordering(787) 00:08:23.513 fused_ordering(788) 00:08:23.513 fused_ordering(789) 00:08:23.513 fused_ordering(790) 00:08:23.513 fused_ordering(791) 00:08:23.513 fused_ordering(792) 00:08:23.513 fused_ordering(793) 00:08:23.513 fused_ordering(794) 00:08:23.513 fused_ordering(795) 00:08:23.513 fused_ordering(796) 00:08:23.513 fused_ordering(797) 00:08:23.513 fused_ordering(798) 00:08:23.513 fused_ordering(799) 00:08:23.513 fused_ordering(800) 00:08:23.513 fused_ordering(801) 00:08:23.513 fused_ordering(802) 00:08:23.513 fused_ordering(803) 00:08:23.513 fused_ordering(804) 00:08:23.513 fused_ordering(805) 00:08:23.513 fused_ordering(806) 00:08:23.513 fused_ordering(807) 00:08:23.513 fused_ordering(808) 00:08:23.513 fused_ordering(809) 00:08:23.513 fused_ordering(810) 00:08:23.513 fused_ordering(811) 00:08:23.513 fused_ordering(812) 00:08:23.513 fused_ordering(813) 00:08:23.513 fused_ordering(814) 00:08:23.513 fused_ordering(815) 00:08:23.513 fused_ordering(816) 00:08:23.513 fused_ordering(817) 00:08:23.513 fused_ordering(818) 00:08:23.513 fused_ordering(819) 00:08:23.513 fused_ordering(820) 00:08:24.081 fused_ordering(821) 00:08:24.081 fused_ordering(822) 00:08:24.081 fused_ordering(823) 00:08:24.081 fused_ordering(824) 00:08:24.081 fused_ordering(825) 00:08:24.081 fused_ordering(826) 00:08:24.081 fused_ordering(827) 00:08:24.081 fused_ordering(828) 00:08:24.081 fused_ordering(829) 00:08:24.081 fused_ordering(830) 00:08:24.081 fused_ordering(831) 00:08:24.081 fused_ordering(832) 00:08:24.081 fused_ordering(833) 00:08:24.081 fused_ordering(834) 00:08:24.081 fused_ordering(835) 00:08:24.081 fused_ordering(836) 00:08:24.081 fused_ordering(837) 00:08:24.081 fused_ordering(838) 00:08:24.081 fused_ordering(839) 00:08:24.081 fused_ordering(840) 00:08:24.081 fused_ordering(841) 00:08:24.081 fused_ordering(842) 00:08:24.081 fused_ordering(843) 00:08:24.081 fused_ordering(844) 00:08:24.081 fused_ordering(845) 00:08:24.081 fused_ordering(846) 00:08:24.081 fused_ordering(847) 00:08:24.081 fused_ordering(848) 00:08:24.081 fused_ordering(849) 00:08:24.081 fused_ordering(850) 
00:08:24.081 fused_ordering(851) 00:08:24.081 fused_ordering(852) 00:08:24.081 fused_ordering(853) 00:08:24.081 fused_ordering(854) 00:08:24.081 fused_ordering(855) 00:08:24.081 fused_ordering(856) 00:08:24.081 fused_ordering(857) 00:08:24.081 fused_ordering(858) 00:08:24.081 fused_ordering(859) 00:08:24.081 fused_ordering(860) 00:08:24.081 fused_ordering(861) 00:08:24.081 fused_ordering(862) 00:08:24.081 fused_ordering(863) 00:08:24.081 fused_ordering(864) 00:08:24.081 fused_ordering(865) 00:08:24.081 fused_ordering(866) 00:08:24.081 fused_ordering(867) 00:08:24.081 fused_ordering(868) 00:08:24.081 fused_ordering(869) 00:08:24.081 fused_ordering(870) 00:08:24.081 fused_ordering(871) 00:08:24.081 fused_ordering(872) 00:08:24.081 fused_ordering(873) 00:08:24.081 fused_ordering(874) 00:08:24.081 fused_ordering(875) 00:08:24.081 fused_ordering(876) 00:08:24.082 fused_ordering(877) 00:08:24.082 fused_ordering(878) 00:08:24.082 fused_ordering(879) 00:08:24.082 fused_ordering(880) 00:08:24.082 fused_ordering(881) 00:08:24.082 fused_ordering(882) 00:08:24.082 fused_ordering(883) 00:08:24.082 fused_ordering(884) 00:08:24.082 fused_ordering(885) 00:08:24.082 fused_ordering(886) 00:08:24.082 fused_ordering(887) 00:08:24.082 fused_ordering(888) 00:08:24.082 fused_ordering(889) 00:08:24.082 fused_ordering(890) 00:08:24.082 fused_ordering(891) 00:08:24.082 fused_ordering(892) 00:08:24.082 fused_ordering(893) 00:08:24.082 fused_ordering(894) 00:08:24.082 fused_ordering(895) 00:08:24.082 fused_ordering(896) 00:08:24.082 fused_ordering(897) 00:08:24.082 fused_ordering(898) 00:08:24.082 fused_ordering(899) 00:08:24.082 fused_ordering(900) 00:08:24.082 fused_ordering(901) 00:08:24.082 fused_ordering(902) 00:08:24.082 fused_ordering(903) 00:08:24.082 fused_ordering(904) 00:08:24.082 fused_ordering(905) 00:08:24.082 fused_ordering(906) 00:08:24.082 fused_ordering(907) 00:08:24.082 fused_ordering(908) 00:08:24.082 fused_ordering(909) 00:08:24.082 fused_ordering(910) 00:08:24.082 fused_ordering(911) 00:08:24.082 fused_ordering(912) 00:08:24.082 fused_ordering(913) 00:08:24.082 fused_ordering(914) 00:08:24.082 fused_ordering(915) 00:08:24.082 fused_ordering(916) 00:08:24.082 fused_ordering(917) 00:08:24.082 fused_ordering(918) 00:08:24.082 fused_ordering(919) 00:08:24.082 fused_ordering(920) 00:08:24.082 fused_ordering(921) 00:08:24.082 fused_ordering(922) 00:08:24.082 fused_ordering(923) 00:08:24.082 fused_ordering(924) 00:08:24.082 fused_ordering(925) 00:08:24.082 fused_ordering(926) 00:08:24.082 fused_ordering(927) 00:08:24.082 fused_ordering(928) 00:08:24.082 fused_ordering(929) 00:08:24.082 fused_ordering(930) 00:08:24.082 fused_ordering(931) 00:08:24.082 fused_ordering(932) 00:08:24.082 fused_ordering(933) 00:08:24.082 fused_ordering(934) 00:08:24.082 fused_ordering(935) 00:08:24.082 fused_ordering(936) 00:08:24.082 fused_ordering(937) 00:08:24.082 fused_ordering(938) 00:08:24.082 fused_ordering(939) 00:08:24.082 fused_ordering(940) 00:08:24.082 fused_ordering(941) 00:08:24.082 fused_ordering(942) 00:08:24.082 fused_ordering(943) 00:08:24.082 fused_ordering(944) 00:08:24.082 fused_ordering(945) 00:08:24.082 fused_ordering(946) 00:08:24.082 fused_ordering(947) 00:08:24.082 fused_ordering(948) 00:08:24.082 fused_ordering(949) 00:08:24.082 fused_ordering(950) 00:08:24.082 fused_ordering(951) 00:08:24.082 fused_ordering(952) 00:08:24.082 fused_ordering(953) 00:08:24.082 fused_ordering(954) 00:08:24.082 fused_ordering(955) 00:08:24.082 fused_ordering(956) 00:08:24.082 fused_ordering(957) 00:08:24.082 
fused_ordering(958) 00:08:24.082 fused_ordering(959) 00:08:24.082 fused_ordering(960) 00:08:24.082 fused_ordering(961) 00:08:24.082 fused_ordering(962) 00:08:24.082 fused_ordering(963) 00:08:24.082 fused_ordering(964) 00:08:24.082 fused_ordering(965) 00:08:24.082 fused_ordering(966) 00:08:24.082 fused_ordering(967) 00:08:24.082 fused_ordering(968) 00:08:24.082 fused_ordering(969) 00:08:24.082 fused_ordering(970) 00:08:24.082 fused_ordering(971) 00:08:24.082 fused_ordering(972) 00:08:24.082 fused_ordering(973) 00:08:24.082 fused_ordering(974) 00:08:24.082 fused_ordering(975) 00:08:24.082 fused_ordering(976) 00:08:24.082 fused_ordering(977) 00:08:24.082 fused_ordering(978) 00:08:24.082 fused_ordering(979) 00:08:24.082 fused_ordering(980) 00:08:24.082 fused_ordering(981) 00:08:24.082 fused_ordering(982) 00:08:24.082 fused_ordering(983) 00:08:24.082 fused_ordering(984) 00:08:24.082 fused_ordering(985) 00:08:24.082 fused_ordering(986) 00:08:24.082 fused_ordering(987) 00:08:24.082 fused_ordering(988) 00:08:24.082 fused_ordering(989) 00:08:24.082 fused_ordering(990) 00:08:24.082 fused_ordering(991) 00:08:24.082 fused_ordering(992) 00:08:24.082 fused_ordering(993) 00:08:24.082 fused_ordering(994) 00:08:24.082 fused_ordering(995) 00:08:24.082 fused_ordering(996) 00:08:24.082 fused_ordering(997) 00:08:24.082 fused_ordering(998) 00:08:24.082 fused_ordering(999) 00:08:24.082 fused_ordering(1000) 00:08:24.082 fused_ordering(1001) 00:08:24.082 fused_ordering(1002) 00:08:24.082 fused_ordering(1003) 00:08:24.082 fused_ordering(1004) 00:08:24.082 fused_ordering(1005) 00:08:24.082 fused_ordering(1006) 00:08:24.082 fused_ordering(1007) 00:08:24.082 fused_ordering(1008) 00:08:24.082 fused_ordering(1009) 00:08:24.082 fused_ordering(1010) 00:08:24.082 fused_ordering(1011) 00:08:24.082 fused_ordering(1012) 00:08:24.082 fused_ordering(1013) 00:08:24.082 fused_ordering(1014) 00:08:24.082 fused_ordering(1015) 00:08:24.082 fused_ordering(1016) 00:08:24.082 fused_ordering(1017) 00:08:24.082 fused_ordering(1018) 00:08:24.082 fused_ordering(1019) 00:08:24.082 fused_ordering(1020) 00:08:24.082 fused_ordering(1021) 00:08:24.082 fused_ordering(1022) 00:08:24.082 fused_ordering(1023) 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.082 rmmod nvme_tcp 00:08:24.082 rmmod nvme_fabrics 00:08:24.082 rmmod nvme_keyring 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 71334 ']' 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 71334 00:08:24.082 20:24:45 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 71334 ']' 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 71334 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71334 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:24.082 killing process with pid 71334 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71334' 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 71334 00:08:24.082 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 71334 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:24.341 00:08:24.341 real 0m3.467s 00:08:24.341 user 0m4.204s 00:08:24.341 sys 0m1.329s 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:24.341 ************************************ 00:08:24.341 END TEST nvmf_fused_ordering 00:08:24.341 ************************************ 00:08:24.341 20:24:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:24.341 20:24:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:24.341 20:24:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:24.341 20:24:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.341 20:24:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.341 20:24:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:24.341 ************************************ 00:08:24.341 START TEST nvmf_delete_subsystem 00:08:24.341 ************************************ 00:08:24.341 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:24.600 * Looking for test storage... 
00:08:24.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:24.600 Cannot find device "nvmf_tgt_br" 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:24.600 Cannot find device "nvmf_tgt_br2" 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:24.600 Cannot find device "nvmf_tgt_br" 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:24.600 Cannot find device "nvmf_tgt_br2" 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:08:24.600 20:24:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:24.600 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:24.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:24.601 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:24.601 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:24.860 20:24:46 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:24.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:08:24.860 00:08:24.860 --- 10.0.0.2 ping statistics --- 00:08:24.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.860 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:24.860 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:24.860 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:08:24.860 00:08:24.860 --- 10.0.0.3 ping statistics --- 00:08:24.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.860 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:24.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:24.860 00:08:24.860 --- 10.0.0.1 ping statistics --- 00:08:24.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.860 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=71559 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 71559 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 71559 ']' 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
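The nvmf_veth_init sequence traced above builds the virtual test network used by the delete_subsystem run. The following is a rough stand-alone sketch of that topology, reconstructed from the ip/iptables commands in the trace; it omits the initial cleanup of leftover interfaces from a previous run, assumes root privileges with iproute2 and iptables available, and is not itself part of the test scripts.

# Sketch only: names and addresses copied from the trace above.
ip netns add nvmf_tgt_ns_spdk                                   # target side lives in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # move target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # bridge tying the host-side veth ends together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in on the initiator interface
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow forwarding across the bridge

The three pings in the trace then confirm connectivity: the host reaches both target addresses (10.0.0.2 and 10.0.0.3), and the namespace reaches the initiator address (10.0.0.1), before nvmf_tgt is started inside nvmf_tgt_ns_spdk.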
00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.860 20:24:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:24.860 [2024-07-15 20:24:46.334089] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:24.860 [2024-07-15 20:24:46.334185] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.119 [2024-07-15 20:24:46.467760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.119 [2024-07-15 20:24:46.525619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.119 [2024-07-15 20:24:46.525679] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.119 [2024-07-15 20:24:46.525690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.119 [2024-07-15 20:24:46.525699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.119 [2024-07-15 20:24:46.525706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.119 [2024-07-15 20:24:46.525790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.119 [2024-07-15 20:24:46.525794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 [2024-07-15 20:24:47.373499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 [2024-07-15 20:24:47.393595] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 NULL1 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 Delay0 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=71614 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:26.056 20:24:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:26.315 [2024-07-15 20:24:47.595402] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
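The rpc_cmd calls traced above configure the target that spdk_nvme_perf then exercises. As a hedged sketch, the same sequence could be driven manually with SPDK's scripts/rpc.py against an already-running nvmf_tgt (an assumption: the harness goes through its own rpc_cmd wrapper and the target started inside nvmf_tgt_ns_spdk); all arguments below are copied from the trace.

# Sketch only: assumes nvmf_tgt is running and rpc.py talks to its default /var/tmp/spdk.sock.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512                   # 1000 MB null bdev with 512-byte blocks
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # wrap it with large artificial latency so I/O stays queued
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Drive queued I/O against the subsystem, then delete it while that I/O is still outstanding:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The burst of "completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines that follows is consistent with the subsystem being torn down while the Delay0-backed namespace still holds queued I/O from the 128-deep perf workload.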
00:08:28.215 20:24:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.215 20:24:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:28.215 20:24:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read 
completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Write completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 starting I/O failed: -6 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.215 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 starting I/O failed: -6 
00:08:28.216 starting I/O failed: -6 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O 
failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 Write completed with error (sct=0, sc=8) 00:08:28.216 Read completed with error (sct=0, sc=8) 00:08:28.216 starting I/O failed: -6 00:08:28.216 starting I/O failed: -6 00:08:28.216 starting I/O failed: -6 00:08:29.148 [2024-07-15 20:24:50.609777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79c510 is same with the state(5) to be set 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, 
sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 [2024-07-15 20:24:50.631389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fde3000d740 is same with the state(5) to be set 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 [2024-07-15 20:24:50.631665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fde3000cfe0 is same with the state(5) to be set 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 
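The long runs of "Read/Write completed with error (sct=0, sc=8)" entries here are the point of this test: spdk_nvme_perf still has queue-depth-128 I/O outstanding against nqn.2016-06.io.spdk:cnode1 when the subsystem is deleted, so every in-flight command comes back aborted (sct=0/sc=8 is the generic "command aborted due to SQ deletion" status) and each new submission fails with -6, i.e. -ENXIO, once the qpair is gone. Reduced to a sketch, and assuming a target is already listening on 10.0.0.2:4420 with cnode1 and a namespace configured, the pattern the script drives is:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# Drive random I/O at the subsystem in the background (same flags the harness uses).
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Delete the subsystem while that I/O is still in flight; perf is expected to
# finish with "errors occurred" rather than a clean run.
$rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid" || true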
00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Write completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.148 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 [2024-07-15 20:24:50.633550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7be4c0 is same with the state(5) to be set 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Write completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 Read completed with error (sct=0, sc=8) 00:08:29.149 [2024-07-15 20:24:50.634342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79c6f0 is same with the state(5) to be set 00:08:29.149 Initializing NVMe Controllers 00:08:29.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.149 Controller IO queue size 128, less than required. 00:08:29.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:29.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:29.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:29.149 Initialization complete. Launching workers. 00:08:29.149 ======================================================== 00:08:29.149 Latency(us) 00:08:29.149 Device Information : IOPS MiB/s Average min max 00:08:29.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 182.39 0.09 909862.91 888.69 1011814.27 00:08:29.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.54 0.08 1008998.54 418.22 1999887.24 00:08:29.149 ======================================================== 00:08:29.149 Total : 340.93 0.17 955962.42 418.22 1999887.24 00:08:29.149 00:08:29.149 [2024-07-15 20:24:50.634851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x79c510 (9): Bad file descriptor 00:08:29.149 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:29.149 20:24:50 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.149 20:24:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:29.149 20:24:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71614 00:08:29.149 20:24:50 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 71614 00:08:29.715 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (71614) - No such process 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 71614 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 71614 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 71614 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.715 [2024-07-15 20:24:51.160765] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=71660 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:29.715 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.973 [2024-07-15 20:24:51.328127] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
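After the subsystem is recreated and a second spdk_nvme_perf (pid 71660) is started against it, the script just polls that process until its 3-second run (-t 3) finishes, giving up after roughly twenty half-second ticks; that is what the repeated "(( delay++ > 20 )) / kill -0 71660 / sleep 0.5" entries below are. A sketch of the loop (the error message is mine, not the harness's):

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do     # perf still running?
    if (( delay++ > 20 )); then               # same bound the trace shows
        echo "spdk_nvme_perf (pid $perf_pid) did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done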
00:08:30.231 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.231 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:30.231 20:24:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.797 20:24:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.797 20:24:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:30.797 20:24:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.363 20:24:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.363 20:24:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:31.363 20:24:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.932 20:24:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.932 20:24:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:31.932 20:24:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.496 20:24:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.496 20:24:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:32.496 20:24:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:32.754 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:32.754 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:32.754 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:33.011 Initializing NVMe Controllers 00:08:33.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.011 Controller IO queue size 128, less than required. 00:08:33.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:33.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:33.011 Initialization complete. Launching workers. 
00:08:33.011 ======================================================== 00:08:33.011 Latency(us) 00:08:33.011 Device Information : IOPS MiB/s Average min max 00:08:33.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003324.45 1000180.87 1010146.24 00:08:33.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006425.25 1000228.36 1043005.29 00:08:33.011 ======================================================== 00:08:33.011 Total : 256.00 0.12 1004874.85 1000180.87 1043005.29 00:08:33.011 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 71660 00:08:33.267 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (71660) - No such process 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 71660 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.267 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.267 rmmod nvme_tcp 00:08:33.267 rmmod nvme_fabrics 00:08:33.525 rmmod nvme_keyring 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 71559 ']' 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 71559 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 71559 ']' 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 71559 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71559 00:08:33.525 killing process with pid 71559 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71559' 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 71559 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 71559 00:08:33.525 20:24:54 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.525 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.526 20:24:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.784 20:24:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:33.784 00:08:33.784 real 0m9.246s 00:08:33.784 user 0m28.665s 00:08:33.784 sys 0m1.526s 00:08:33.784 20:24:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:33.784 20:24:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:33.784 ************************************ 00:08:33.784 END TEST nvmf_delete_subsystem 00:08:33.784 ************************************ 00:08:33.784 20:24:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:33.784 20:24:55 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:08:33.784 20:24:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:33.784 20:24:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.784 20:24:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:33.784 ************************************ 00:08:33.784 START TEST nvmf_ns_masking 00:08:33.784 ************************************ 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:08:33.784 * Looking for test storage... 
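The nvmftestfini block above (sync, module unload, killprocess 71559, namespace and address cleanup) is the standard teardown between these suites. Stripped of the harness helpers it amounts to roughly the following; `ip netns delete` is an assumption about what _remove_spdk_ns does, and $nvmfpid stands for the target pid:

sync
modprobe -v -r nvme-tcp          # also pulls out nvme-fabrics/nvme-keyring, as the rmmod lines show
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # the nvmf_tgt started for the suite (71559 in this run)
ip netns delete nvmf_tgt_ns_spdk # assumption: the effect of _remove_spdk_ns
ip -4 addr flush nvmf_init_if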
00:08:33.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b5e02eb9-d27e-4b5e-8d98-ac39973188f1 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=de06b37a-157b-4367-ba1e-dbe8ad9a76d8 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:08:33.784 
20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=60be481f-5194-4591-9340-88b8d289388b 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:33.784 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:33.785 Cannot find device "nvmf_tgt_br" 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:08:33.785 20:24:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:33.785 Cannot find device "nvmf_tgt_br2" 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:33.785 Cannot find device "nvmf_tgt_br" 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:08:33.785 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:34.042 Cannot find device "nvmf_tgt_br2" 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:34.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:34.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:34.042 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:34.043 20:24:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:34.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:08:34.043 00:08:34.043 --- 10.0.0.2 ping statistics --- 00:08:34.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.043 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:34.043 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.043 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:08:34.043 00:08:34.043 --- 10.0.0.3 ping statistics --- 00:08:34.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.043 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:34.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:34.043 00:08:34.043 --- 10.0.0.1 ping statistics --- 00:08:34.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.043 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.043 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=71895 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 71895 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 71895 ']' 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:34.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:34.302 20:24:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:34.302 [2024-07-15 20:24:55.624847] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:34.302 [2024-07-15 20:24:55.624959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.302 [2024-07-15 20:24:55.765677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.560 [2024-07-15 20:24:55.838009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.560 [2024-07-15 20:24:55.838071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
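For this suite the harness rebuilds its virtual test network from scratch before launching the target: a nvmf_tgt_ns_spdk namespace holds the target side of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator end nvmf_init_if (10.0.0.1) stays in the root namespace, everything is joined by the nvmf_br bridge, TCP/4420 is allowed through iptables, and the pings above confirm reachability before nvmf_tgt is started inside the namespace. Condensed to one interface pair (the 10.0.0.3 leg and the waitforlisten polling are left out), that is:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # root namespace -> target namespace
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &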
00:08:34.560 [2024-07-15 20:24:55.838086] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.560 [2024-07-15 20:24:55.838097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.560 [2024-07-15 20:24:55.838105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.560 [2024-07-15 20:24:55.838142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.127 20:24:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.127 20:24:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:35.127 20:24:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.127 20:24:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.127 20:24:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:35.409 20:24:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.409 20:24:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.667 [2024-07-15 20:24:56.937189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.667 20:24:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:08:35.667 20:24:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:08:35.667 20:24:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:08:35.925 Malloc1 00:08:35.925 20:24:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:08:36.183 Malloc2 00:08:36.183 20:24:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.441 20:24:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:08:36.698 20:24:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.956 [2024-07-15 20:24:58.416096] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.956 20:24:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:08:36.956 20:24:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 60be481f-5194-4591-9340-88b8d289388b -a 10.0.0.2 -s 4420 -i 4 00:08:37.214 20:24:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.214 20:24:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:37.214 20:24:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.214 20:24:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:37.214 20:24:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 
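With the target up, ns_masking.sh provisions it over RPC and then attaches the kernel initiator with a fixed host NQN and host ID (the 60be481f-... UUID generated above); waitforserial simply polls lsblk until a block device carrying the subsystem serial appears, as the trace just below shows. The sequence, reduced to a sketch (the poll loop approximates the helper):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc1
$rpc_py bdev_malloc_create 64 512 -b Malloc2
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 60be481f-5194-4591-9340-88b8d289388b -i 4

# waitforserial: wait for a namespace block device with the target's serial number
for _ in $(seq 1 15); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 1 )) && break
    sleep 2
done

# The masking steps exercised further below then re-add the namespace without
# auto-visibility and grant it to one host explicitly:
#   $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
#   $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1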
00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:39.116 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:39.374 [ 0]:0x1 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a24bee0dfc0b431d9ddaa96a8a01416a 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a24bee0dfc0b431d9ddaa96a8a01416a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:39.374 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:08:39.634 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:08:39.634 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:39.634 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:39.634 [ 0]:0x1 00:08:39.634 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:39.634 20:25:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a24bee0dfc0b431d9ddaa96a8a01416a 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a24bee0dfc0b431d9ddaa96a8a01416a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:39.634 [ 1]:0x2 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=313945881699466db0a1e447b1011691 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:08:39.634 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.892 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.151 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 60be481f-5194-4591-9340-88b8d289388b -a 10.0.0.2 -s 4420 -i 4 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:08:40.409 20:25:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:42.940 20:25:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:42.940 [ 0]:0x2 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=313945881699466db0a1e447b1011691 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:42.940 [ 0]:0x1 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a24bee0dfc0b431d9ddaa96a8a01416a 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a24bee0dfc0b431d9ddaa96a8a01416a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:42.940 [ 1]:0x2 00:08:42.940 20:25:04 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:42.940 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:43.204 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=313945881699466db0a1e447b1011691 00:08:43.204 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:43.204 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:43.464 [ 0]:0x2 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=313945881699466db0a1e447b1011691 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
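Every ns_is_visible / NOT ns_is_visible step in this test reduces to the same host-side probe: the namespace has to show up in nvme list-ns and the NGUID reported by nvme id-ns has to be non-zero, while a masked namespace either drops out of the list or reports an all-zero NGUID. A rough reconstruction of that check, pieced together from the xtrace above (the function name and its return-code convention are the test script's own):

    ns_is_visible() {
        local nsid=$1
        # The NSID has to appear in the controller's active namespace list...
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # ...and Identify Namespace has to report a real (non-zero) NGUID for it.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    ns_is_visible 0x1    # returns 0 while host1 is allowed to see NSID 1
    ns_is_visible 0x2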
00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.464 20:25:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 60be481f-5194-4591-9340-88b8d289388b -a 10.0.0.2 -s 4420 -i 4 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:08:44.028 20:25:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:45.925 [ 0]:0x1 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:45.925 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a24bee0dfc0b431d9ddaa96a8a01416a 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a24bee0dfc0b431d9ddaa96a8a01416a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:08:46.182 [ 1]:0x2 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=313945881699466db0a1e447b1011691 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:46.182 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:46.440 [ 0]:0x2 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:46.440 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=313945881699466db0a1e447b1011691 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:46.699 20:25:07 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:46.699 20:25:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:08:46.958 [2024-07-15 20:25:08.235307] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:08:46.958 2024/07/15 20:25:08 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:08:46.958 request: 00:08:46.958 { 00:08:46.958 "method": "nvmf_ns_remove_host", 00:08:46.958 "params": { 00:08:46.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.958 "nsid": 2, 00:08:46.958 "host": "nqn.2016-06.io.spdk:host1" 00:08:46.958 } 00:08:46.958 } 00:08:46.958 Got JSON-RPC error response 00:08:46.958 GoRPCClient: error on JSON-RPC call 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:08:46.958 20:25:08 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:08:46.958 [ 0]:0x2 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=313945881699466db0a1e447b1011691 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 313945881699466db0a1e447b1011691 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=72284 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 72284 /var/tmp/host.sock 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 72284 ']' 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
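On the target side, the masking exercised above is driven entirely by rpc.py: a namespace added with --no-auto-visible starts out hidden from every host, nvmf_ns_add_host grants a single host NQN access to it, and nvmf_ns_remove_host revokes that access again; the Invalid parameters error logged earlier is what comes back when the same call targets a namespace that was added auto-visible. Condensed from this run (rpc.py is assumed to be on PATH here; the test calls it by full path):

    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # host1 now sees NSID 1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden from host1 again
    # NSID 2 was added without --no-auto-visible, so per-host masking is rejected:
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1   # -> Code=-32602 Invalid parameters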
00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.958 20:25:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:47.217 [2024-07-15 20:25:08.500610] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:47.217 [2024-07-15 20:25:08.500725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72284 ] 00:08:47.217 [2024-07-15 20:25:08.638540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.217 [2024-07-15 20:25:08.711361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.150 20:25:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.150 20:25:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:08:48.150 20:25:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.409 20:25:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:48.668 20:25:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b5e02eb9-d27e-4b5e-8d98-ac39973188f1 00:08:48.668 20:25:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:08:48.668 20:25:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B5E02EB9D27E4B5E8D98AC39973188F1 -i 00:08:49.235 20:25:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid de06b37a-157b-4367-ba1e-dbe8ad9a76d8 00:08:49.235 20:25:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:08:49.235 20:25:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DE06B37A157B4367BA1EDBE8AD9A76D8 -i 00:08:49.494 20:25:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:08:49.752 20:25:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:08:50.011 20:25:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:08:50.011 20:25:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:08:50.269 nvme0n1 00:08:50.269 20:25:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:08:50.269 20:25:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp 
-a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:08:50.836 nvme1n2 00:08:50.836 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:08:50.836 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:08:50.836 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:08:50.836 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:08:50.836 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:08:51.094 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:08:51.094 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:08:51.094 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:08:51.094 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:08:51.352 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b5e02eb9-d27e-4b5e-8d98-ac39973188f1 == \b\5\e\0\2\e\b\9\-\d\2\7\e\-\4\b\5\e\-\8\d\9\8\-\a\c\3\9\9\7\3\1\8\8\f\1 ]] 00:08:51.352 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:08:51.352 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:08:51.352 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:08:51.610 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ de06b37a-157b-4367-ba1e-dbe8ad9a76d8 == \d\e\0\6\b\3\7\a\-\1\5\7\b\-\4\3\6\7\-\b\a\1\e\-\d\b\e\8\a\d\9\a\7\6\d\8 ]] 00:08:51.610 20:25:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 72284 00:08:51.610 20:25:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 72284 ']' 00:08:51.610 20:25:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 72284 00:08:51.610 20:25:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:08:51.610 20:25:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:51.611 20:25:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72284 00:08:51.611 killing process with pid 72284 00:08:51.611 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:51.611 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:51.611 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72284' 00:08:51.611 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 72284 00:08:51.611 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 72284 00:08:51.869 20:25:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.127 20:25:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:08:52.127 20:25:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:08:52.127 20:25:13 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.127 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.386 rmmod nvme_tcp 00:08:52.386 rmmod nvme_fabrics 00:08:52.386 rmmod nvme_keyring 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 71895 ']' 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 71895 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 71895 ']' 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 71895 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 71895 00:08:52.386 killing process with pid 71895 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 71895' 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 71895 00:08:52.386 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 71895 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:52.644 00:08:52.644 real 0m18.877s 00:08:52.644 user 0m30.943s 00:08:52.644 sys 0m2.633s 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:52.644 ************************************ 00:08:52.644 END TEST nvmf_ns_masking 00:08:52.644 ************************************ 00:08:52.644 20:25:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:08:52.644 20:25:14 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:52.644 20:25:14 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:08:52.644 20:25:14 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:08:52.644 20:25:14 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:52.644 20:25:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:52.644 20:25:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:52.644 20:25:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:52.644 ************************************ 00:08:52.644 START TEST nvmf_host_management 00:08:52.644 ************************************ 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:52.644 * Looking for test storage... 00:08:52.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.644 20:25:14 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:52.645 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:52.903 Cannot find device "nvmf_tgt_br" 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:52.903 Cannot find device "nvmf_tgt_br2" 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:08:52.903 20:25:14 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:52.903 Cannot find device "nvmf_tgt_br" 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:52.903 Cannot find device "nvmf_tgt_br2" 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:52.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:52.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:52.903 
20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:52.903 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:53.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:08:53.162 00:08:53.162 --- 10.0.0.2 ping statistics --- 00:08:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.162 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:53.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:53.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:08:53.162 00:08:53.162 --- 10.0.0.3 ping statistics --- 00:08:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.162 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:53.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:53.162 00:08:53.162 --- 10.0.0.1 ping statistics --- 00:08:53.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.162 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=72652 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 72652 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72652 ']' 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.162 20:25:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:53.162 [2024-07-15 20:25:14.528121] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:53.162 [2024-07-15 20:25:14.528218] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.420 [2024-07-15 20:25:14.670779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.420 [2024-07-15 20:25:14.743160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.420 [2024-07-15 20:25:14.743212] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.420 [2024-07-15 20:25:14.743225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.420 [2024-07-15 20:25:14.743235] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.420 [2024-07-15 20:25:14.743243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
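With NET_TYPE=virt the host_management test never touches real NICs: nvmf_veth_init runs the target inside a network namespace, gives each side one end of a veth pair, and bridges the host-side ends so the initiator at 10.0.0.1 can reach the target at 10.0.0.2 (a second pair for 10.0.0.3 is built the same way). The essential commands, condensed from the log above with the same interface and namespace names:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth ends together and let NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # initiator -> target, as verified by the ping statistics above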
00:08:53.420 [2024-07-15 20:25:14.743425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.420 [2024-07-15 20:25:14.743472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.420 [2024-07-15 20:25:14.743559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:53.420 [2024-07-15 20:25:14.743566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 [2024-07-15 20:25:15.561281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 Malloc0 00:08:54.354 [2024-07-15 20:25:15.623561] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
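The target-side bring-up above happens in two pieces: nvmf_create_transport is issued directly through rpc_cmd, and the remaining provisioning is batched through the rpcs.txt file, whose contents the log never prints; what is visible is that it ends with a Malloc0 bdev and a listener on 10.0.0.2:4420, reached later by bdevperf as nqn.2016-06.io.spdk:cnode0. A plausible equivalent of that sequence, reconstructed from those outputs and from the NVMF_SERIAL defined in nvmf/common.sh rather than read from the file:

    # Hypothetical reconstruction -- the real batch lives in target/rpcs.txt, which this log does not print.
    rpc.py nvmf_create_transport -t tcp -o -u 8192            # issued directly in the log
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420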
00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=72725 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 72725 /var/tmp/bdevperf.sock 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 72725 ']' 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:54.354 { 00:08:54.354 "params": { 00:08:54.354 "name": "Nvme$subsystem", 00:08:54.354 "trtype": "$TEST_TRANSPORT", 00:08:54.354 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:54.354 "adrfam": "ipv4", 00:08:54.354 "trsvcid": "$NVMF_PORT", 00:08:54.354 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:54.354 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:54.354 "hdgst": ${hdgst:-false}, 00:08:54.354 "ddgst": ${ddgst:-false} 00:08:54.354 }, 00:08:54.354 "method": "bdev_nvme_attach_controller" 00:08:54.354 } 00:08:54.354 EOF 00:08:54.354 )") 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:54.354 20:25:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:54.354 "params": { 00:08:54.354 "name": "Nvme0", 00:08:54.355 "trtype": "tcp", 00:08:54.355 "traddr": "10.0.0.2", 00:08:54.355 "adrfam": "ipv4", 00:08:54.355 "trsvcid": "4420", 00:08:54.355 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:54.355 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:54.355 "hdgst": false, 00:08:54.355 "ddgst": false 00:08:54.355 }, 00:08:54.355 "method": "bdev_nvme_attach_controller" 00:08:54.355 }' 00:08:54.355 [2024-07-15 20:25:15.727781] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:08:54.355 [2024-07-15 20:25:15.727891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72725 ] 00:08:54.613 [2024-07-15 20:25:15.864247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.613 [2024-07-15 20:25:15.924328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.613 Running I/O for 10 seconds... 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:54.872 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.132 20:25:16 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.132 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.132 [2024-07-15 20:25:16.508547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.508977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.508992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.132 [2024-07-15 20:25:16.509237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.132 [2024-07-15 20:25:16.509249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509615] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.133 [2024-07-15 20:25:16.509952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.133 [2024-07-15 20:25:16.509962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.134 [2024-07-15 20:25:16.509974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:55.134 [2024-07-15 20:25:16.509983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.134 [2024-07-15 20:25:16.510045] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1591820 was disconnected and freed. reset controller. 
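Each entry in the burst above is nvme_qpair.c reporting one in-flight READ or WRITE together with its completion status after the target tore down the queue pair: "ABORTED - SQ DELETION (00/08)" is the status assigned to commands that were still outstanding when the submission queue disappeared, and the final bdev_nvme line shows the qpair being freed and a controller reset being scheduled. When skimming a log like this, the burst can be reduced to a count with a reader-side one-liner (the file name is a hypothetical saved copy of this output):

    grep -c 'ABORTED - SQ DELETION' nvmf_host_management.log   # number of aborted commands in the burst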
00:08:55.134 [2024-07-15 20:25:16.511232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:55.134 task offset: 77696 on job bdev=Nvme0n1 fails 00:08:55.134 00:08:55.134 Latency(us) 00:08:55.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.134 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:55.134 Job: Nvme0n1 ended in about 0.44 seconds with error 00:08:55.134 Verification LBA range: start 0x0 length 0x400 00:08:55.134 Nvme0n1 : 0.44 1299.35 81.21 144.37 0.00 42578.04 2115.03 43849.54 00:08:55.134 =================================================================================================================== 00:08:55.134 Total : 1299.35 81.21 144.37 0.00 42578.04 2115.03 43849.54 00:08:55.134 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.134 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:55.134 [2024-07-15 20:25:16.513294] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.134 [2024-07-15 20:25:16.513319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1591af0 (9): Bad file descriptor 00:08:55.134 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.134 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:55.134 20:25:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.134 20:25:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:55.134 [2024-07-15 20:25:16.525994] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
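The host-access toggle that provoked the abort burst and the reset is the pair of RPCs traced above; condensed, with the NQNs taken from this run, it is:

    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # target drops the host's qpairs
    rpc.py nvmf_subsystem_add_host    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # lets the pending reset reconnect
    sleep 1                                                                                   # host_management.sh@87 waits before checking

The ~0.44 s bdevperf job above ending in error is expected at this point; what matters for the test is that the target keeps running and the controller reset completes ("Resetting controller successful") once the host NQN is re-added.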
00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 72725 00:08:56.070 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (72725) - No such process 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:56.070 { 00:08:56.070 "params": { 00:08:56.070 "name": "Nvme$subsystem", 00:08:56.070 "trtype": "$TEST_TRANSPORT", 00:08:56.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.070 "adrfam": "ipv4", 00:08:56.070 "trsvcid": "$NVMF_PORT", 00:08:56.070 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.070 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.070 "hdgst": ${hdgst:-false}, 00:08:56.070 "ddgst": ${ddgst:-false} 00:08:56.070 }, 00:08:56.070 "method": "bdev_nvme_attach_controller" 00:08:56.070 } 00:08:56.070 EOF 00:08:56.070 )") 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:56.070 20:25:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:56.070 "params": { 00:08:56.070 "name": "Nvme0", 00:08:56.070 "trtype": "tcp", 00:08:56.070 "traddr": "10.0.0.2", 00:08:56.070 "adrfam": "ipv4", 00:08:56.070 "trsvcid": "4420", 00:08:56.070 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:56.070 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:56.070 "hdgst": false, 00:08:56.070 "ddgst": false 00:08:56.070 }, 00:08:56.070 "method": "bdev_nvme_attach_controller" 00:08:56.070 }' 00:08:56.329 [2024-07-15 20:25:17.577991] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:56.329 [2024-07-15 20:25:17.578079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72767 ] 00:08:56.329 [2024-07-15 20:25:17.709557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.329 [2024-07-15 20:25:17.783842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.600 Running I/O for 1 seconds... 
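Because the first bdevperf instance already exited after the forced reset (spdk_app_stop'd on non-zero), the kill -9 at host_management.sh line 91 finds no process and the script tolerates that before launching a short re-check run. A sketch of that step follows; process substitution stands in for the /dev/fd/62 redirection in the log, and the bare bdevperf name for the full build/examples path.

    kill -9 "$perfpid" || true                        # first bdevperf is already gone; 'No such process' is fine
    rm -f /var/tmp/spdk_cpu_lock_00{1..4}             # clear stale CPU lock files
    bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1   # 1-second verify against the same subsystem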
00:08:57.530 00:08:57.530 Latency(us) 00:08:57.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:57.530 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:57.530 Verification LBA range: start 0x0 length 0x400 00:08:57.530 Nvme0n1 : 1.01 1461.66 91.35 0.00 0.00 42896.77 4557.73 37891.72 00:08:57.530 =================================================================================================================== 00:08:57.530 Total : 1461.66 91.35 0.00 0.00 42896.77 4557.73 37891.72 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:57.788 rmmod nvme_tcp 00:08:57.788 rmmod nvme_fabrics 00:08:57.788 rmmod nvme_keyring 00:08:57.788 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 72652 ']' 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 72652 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 72652 ']' 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 72652 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72652 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:57.789 killing process with pid 72652 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72652' 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 72652 00:08:57.789 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 72652 00:08:58.048 [2024-07-15 20:25:19.365923] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:58.048 00:08:58.048 real 0m5.413s 00:08:58.048 user 0m20.907s 00:08:58.048 sys 0m1.151s 00:08:58.048 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.049 20:25:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:58.049 ************************************ 00:08:58.049 END TEST nvmf_host_management 00:08:58.049 ************************************ 00:08:58.049 20:25:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:58.049 20:25:19 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:58.049 20:25:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:58.049 20:25:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.049 20:25:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:58.049 ************************************ 00:08:58.049 START TEST nvmf_lvol 00:08:58.049 ************************************ 00:08:58.049 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:58.309 * Looking for test storage... 
00:08:58.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.309 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:58.310 20:25:19 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:58.310 Cannot find device "nvmf_tgt_br" 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.310 Cannot find device "nvmf_tgt_br2" 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:58.310 Cannot find device "nvmf_tgt_br" 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:58.310 Cannot find device "nvmf_tgt_br2" 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:58.310 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:58.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:58.568 00:08:58.568 --- 10.0.0.2 ping statistics --- 00:08:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.568 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:58.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:58.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:08:58.568 00:08:58.568 --- 10.0.0.3 ping statistics --- 00:08:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.568 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:58.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:08:58.568 00:08:58.568 --- 10.0.0.1 ping statistics --- 00:08:58.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.568 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=72980 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 72980 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 72980 ']' 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.568 20:25:19 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.568 [2024-07-15 20:25:20.004294] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:08:58.568 [2024-07-15 20:25:20.004379] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.827 [2024-07-15 20:25:20.139714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:58.827 [2024-07-15 20:25:20.208132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.827 [2024-07-15 20:25:20.208193] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:58.827 [2024-07-15 20:25:20.208206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.827 [2024-07-15 20:25:20.208216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.827 [2024-07-15 20:25:20.208225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.827 [2024-07-15 20:25:20.208387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.827 [2024-07-15 20:25:20.209216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.827 [2024-07-15 20:25:20.209265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.827 20:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.392 [2024-07-15 20:25:20.588861] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.392 20:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.651 20:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:59.651 20:25:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.910 20:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:59.910 20:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:00.169 20:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:00.428 20:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d24d429d-8616-4509-8228-b01383493406 00:09:00.428 20:25:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d24d429d-8616-4509-8228-b01383493406 lvol 20 00:09:00.686 20:25:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7cb84f70-7d21-44f6-8c67-e7ca1e1e418e 00:09:00.686 20:25:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.944 20:25:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7cb84f70-7d21-44f6-8c67-e7ca1e1e418e 00:09:01.202 20:25:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:01.460 [2024-07-15 20:25:22.913136] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.461 20:25:22 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:02.025 20:25:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=73120 00:09:02.025 20:25:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:02.025 20:25:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:02.958 20:25:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7cb84f70-7d21-44f6-8c67-e7ca1e1e418e MY_SNAPSHOT 00:09:03.217 20:25:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6f37cc65-6573-4f1c-b963-c6288960c920 00:09:03.217 20:25:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7cb84f70-7d21-44f6-8c67-e7ca1e1e418e 30 00:09:03.473 20:25:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 6f37cc65-6573-4f1c-b963-c6288960c920 MY_CLONE 00:09:04.037 20:25:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=970a7164-e1a4-4339-9d91-8b332b73021f 00:09:04.037 20:25:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 970a7164-e1a4-4339-9d91-8b332b73021f 00:09:04.648 20:25:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 73120 00:09:12.766 Initializing NVMe Controllers 00:09:12.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:12.766 Controller IO queue size 128, less than required. 00:09:12.766 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:12.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:12.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:12.766 Initialization complete. Launching workers. 
00:09:12.766 ======================================================== 00:09:12.766 Latency(us) 00:09:12.766 Device Information : IOPS MiB/s Average min max 00:09:12.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10443.70 40.80 12260.60 2123.37 58346.19 00:09:12.766 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10528.10 41.13 12161.56 3469.11 63821.08 00:09:12.766 ======================================================== 00:09:12.766 Total : 20971.80 81.92 12210.88 2123.37 63821.08 00:09:12.766 00:09:12.766 20:25:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.766 20:25:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7cb84f70-7d21-44f6-8c67-e7ca1e1e418e 00:09:12.766 20:25:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d24d429d-8616-4509-8228-b01383493406 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.024 rmmod nvme_tcp 00:09:13.024 rmmod nvme_fabrics 00:09:13.024 rmmod nvme_keyring 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 72980 ']' 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 72980 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 72980 ']' 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 72980 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72980 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.024 killing process with pid 72980 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72980' 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 72980 00:09:13.024 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 72980 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
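For reference, the lvol-over-NVMe/TCP workflow that nvmf_lvol.sh exercised above condenses to roughly the RPC sequence below. This is a sketch reconstructed from the trace, not the script itself: the $rpc shorthand and the shell variables capturing returned UUIDs are added here, while the trace uses the full rpc.py path and the concrete UUIDs it got back.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport + backing bdevs: two 64 MB malloc bdevs (512-byte blocks) striped into raid0
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  # lvstore on the raid, then an lvol of size 20 inside it
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  # export the lvol over NVMe/TCP on the namespaced target address
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # snapshot / resize / clone / inflate while spdk_nvme_perf keeps writing to the namespace
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  # teardown, as in the trace above
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"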
00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:13.282 00:09:13.282 real 0m15.244s 00:09:13.282 user 1m4.635s 00:09:13.282 sys 0m3.780s 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.282 ************************************ 00:09:13.282 END TEST nvmf_lvol 00:09:13.282 ************************************ 00:09:13.282 20:25:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:13.282 20:25:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:13.282 20:25:34 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:13.282 20:25:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.282 20:25:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.282 20:25:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.541 ************************************ 00:09:13.541 START TEST nvmf_lvs_grow 00:09:13.541 ************************************ 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:13.541 * Looking for test storage... 
00:09:13.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:13.541 Cannot find device "nvmf_tgt_br" 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:13.541 Cannot find device "nvmf_tgt_br2" 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:13.541 Cannot find device "nvmf_tgt_br" 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:13.541 Cannot find device "nvmf_tgt_br2" 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:09:13.541 20:25:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:13.541 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:13.541 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:13.800 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:13.800 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:13.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:13.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:09:13.800 00:09:13.800 --- 10.0.0.2 ping statistics --- 00:09:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.800 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:13.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:13.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:09:13.800 00:09:13.800 --- 10.0.0.3 ping statistics --- 00:09:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.800 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:13.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:13.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:09:13.800 00:09:13.800 --- 10.0.0.1 ping statistics --- 00:09:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.800 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=73479 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 73479 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 73479 ']' 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
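The veth/namespace/bridge topology that nvmf_veth_init rebuilt above (and earlier for nvmf_lvol) reduces to the commands below. This is a condensed sketch of exactly what the trace shows; the $NS shorthand is added here, and the cleanup of leftovers from the previous test that precedes it in the trace is omitted.

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"
  # three veth pairs: the *_if ends carry traffic, the *_br ends stay in the root namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the two target-side interfaces into the SPDK target namespace
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up
  # bridge the three root-namespace peer ends together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # open the NVMe/TCP port and allow forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check in both directions, as the pings above verify
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec "$NS" ping -c 1 10.0.0.1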
00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.800 20:25:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.059 [2024-07-15 20:25:35.336882] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:14.059 [2024-07-15 20:25:35.336977] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.059 [2024-07-15 20:25:35.473591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.059 [2024-07-15 20:25:35.542160] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.059 [2024-07-15 20:25:35.542210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.059 [2024-07-15 20:25:35.542222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.059 [2024-07-15 20:25:35.542233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.059 [2024-07-15 20:25:35.542241] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.059 [2024-07-15 20:25:35.542268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.992 20:25:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.250 [2024-07-15 20:25:36.722513] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.250 20:25:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:15.250 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:15.250 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:15.250 20:25:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:15.507 ************************************ 00:09:15.507 START TEST lvs_grow_clean 00:09:15.507 ************************************ 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:15.507 20:25:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:15.507 20:25:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.765 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:15.765 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:16.023 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:16.023 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:16.023 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:16.281 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:16.281 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:16.281 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 645e3e69-b2f2-4986-8a72-eb620108eeaf lvol 150 00:09:16.539 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=07689386-4fdd-4028-aca0-a4788ea466a5 00:09:16.539 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:16.539 20:25:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:16.797 [2024-07-15 20:25:38.250103] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:16.797 [2024-07-15 20:25:38.250192] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:16.797 true 00:09:16.797 20:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:16.797 20:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:17.364 20:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:17.364 20:25:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:17.622 20:25:38 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 07689386-4fdd-4028-aca0-a4788ea466a5 00:09:17.880 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:18.139 [2024-07-15 20:25:39.443867] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.139 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73652 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73652 /var/tmp/bdevperf.sock 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 73652 ']' 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.398 20:25:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:18.398 [2024-07-15 20:25:39.799157] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:09:18.398 [2024-07-15 20:25:39.799260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73652 ] 00:09:18.657 [2024-07-15 20:25:39.937495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.657 [2024-07-15 20:25:40.010949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.657 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:18.657 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:09:18.657 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:19.227 Nvme0n1 00:09:19.227 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:19.227 [ 00:09:19.227 { 00:09:19.227 "aliases": [ 00:09:19.227 "07689386-4fdd-4028-aca0-a4788ea466a5" 00:09:19.227 ], 00:09:19.227 "assigned_rate_limits": { 00:09:19.227 "r_mbytes_per_sec": 0, 00:09:19.227 "rw_ios_per_sec": 0, 00:09:19.228 "rw_mbytes_per_sec": 0, 00:09:19.228 "w_mbytes_per_sec": 0 00:09:19.228 }, 00:09:19.228 "block_size": 4096, 00:09:19.228 "claimed": false, 00:09:19.228 "driver_specific": { 00:09:19.228 "mp_policy": "active_passive", 00:09:19.228 "nvme": [ 00:09:19.228 { 00:09:19.228 "ctrlr_data": { 00:09:19.228 "ana_reporting": false, 00:09:19.228 "cntlid": 1, 00:09:19.228 "firmware_revision": "24.09", 00:09:19.228 "model_number": "SPDK bdev Controller", 00:09:19.228 "multi_ctrlr": true, 00:09:19.228 "oacs": { 00:09:19.228 "firmware": 0, 00:09:19.228 "format": 0, 00:09:19.228 "ns_manage": 0, 00:09:19.228 "security": 0 00:09:19.228 }, 00:09:19.228 "serial_number": "SPDK0", 00:09:19.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:19.228 "vendor_id": "0x8086" 00:09:19.228 }, 00:09:19.228 "ns_data": { 00:09:19.228 "can_share": true, 00:09:19.228 "id": 1 00:09:19.228 }, 00:09:19.228 "trid": { 00:09:19.228 "adrfam": "IPv4", 00:09:19.228 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:19.228 "traddr": "10.0.0.2", 00:09:19.228 "trsvcid": "4420", 00:09:19.228 "trtype": "TCP" 00:09:19.228 }, 00:09:19.228 "vs": { 00:09:19.228 "nvme_version": "1.3" 00:09:19.228 } 00:09:19.228 } 00:09:19.228 ] 00:09:19.228 }, 00:09:19.228 "memory_domains": [ 00:09:19.228 { 00:09:19.228 "dma_device_id": "system", 00:09:19.228 "dma_device_type": 1 00:09:19.228 } 00:09:19.228 ], 00:09:19.228 "name": "Nvme0n1", 00:09:19.228 "num_blocks": 38912, 00:09:19.228 "product_name": "NVMe disk", 00:09:19.228 "supported_io_types": { 00:09:19.228 "abort": true, 00:09:19.228 "compare": true, 00:09:19.228 "compare_and_write": true, 00:09:19.228 "copy": true, 00:09:19.228 "flush": true, 00:09:19.228 "get_zone_info": false, 00:09:19.228 "nvme_admin": true, 00:09:19.228 "nvme_io": true, 00:09:19.228 "nvme_io_md": false, 00:09:19.228 "nvme_iov_md": false, 00:09:19.228 "read": true, 00:09:19.228 "reset": true, 00:09:19.228 "seek_data": false, 00:09:19.228 "seek_hole": false, 00:09:19.228 "unmap": true, 00:09:19.228 "write": true, 00:09:19.228 "write_zeroes": true, 00:09:19.228 "zcopy": false, 00:09:19.228 
"zone_append": false, 00:09:19.228 "zone_management": false 00:09:19.228 }, 00:09:19.228 "uuid": "07689386-4fdd-4028-aca0-a4788ea466a5", 00:09:19.228 "zoned": false 00:09:19.228 } 00:09:19.228 ] 00:09:19.505 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73686 00:09:19.505 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:19.505 20:25:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:19.505 Running I/O for 10 seconds... 00:09:20.437 Latency(us) 00:09:20.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.437 Nvme0n1 : 1.00 7926.00 30.96 0.00 0.00 0.00 0.00 0.00 00:09:20.437 =================================================================================================================== 00:09:20.437 Total : 7926.00 30.96 0.00 0.00 0.00 0.00 0.00 00:09:20.437 00:09:21.368 20:25:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:21.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.626 Nvme0n1 : 2.00 7811.50 30.51 0.00 0.00 0.00 0.00 0.00 00:09:21.626 =================================================================================================================== 00:09:21.626 Total : 7811.50 30.51 0.00 0.00 0.00 0.00 0.00 00:09:21.626 00:09:21.626 true 00:09:21.626 20:25:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:21.626 20:25:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:22.191 20:25:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:22.191 20:25:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:22.191 20:25:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 73686 00:09:22.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.449 Nvme0n1 : 3.00 7750.33 30.27 0.00 0.00 0.00 0.00 0.00 00:09:22.449 =================================================================================================================== 00:09:22.449 Total : 7750.33 30.27 0.00 0.00 0.00 0.00 0.00 00:09:22.449 00:09:23.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.383 Nvme0n1 : 4.00 7702.25 30.09 0.00 0.00 0.00 0.00 0.00 00:09:23.383 =================================================================================================================== 00:09:23.383 Total : 7702.25 30.09 0.00 0.00 0.00 0.00 0.00 00:09:23.383 00:09:24.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.758 Nvme0n1 : 5.00 7733.60 30.21 0.00 0.00 0.00 0.00 0.00 00:09:24.758 =================================================================================================================== 00:09:24.758 Total : 7733.60 30.21 0.00 0.00 0.00 0.00 0.00 00:09:24.758 00:09:25.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.693 
Nvme0n1 : 6.00 7744.83 30.25 0.00 0.00 0.00 0.00 0.00 00:09:25.693 =================================================================================================================== 00:09:25.693 Total : 7744.83 30.25 0.00 0.00 0.00 0.00 0.00 00:09:25.693 00:09:26.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.628 Nvme0n1 : 7.00 7744.00 30.25 0.00 0.00 0.00 0.00 0.00 00:09:26.628 =================================================================================================================== 00:09:26.628 Total : 7744.00 30.25 0.00 0.00 0.00 0.00 0.00 00:09:26.628 00:09:27.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.612 Nvme0n1 : 8.00 7594.88 29.67 0.00 0.00 0.00 0.00 0.00 00:09:27.612 =================================================================================================================== 00:09:27.612 Total : 7594.88 29.67 0.00 0.00 0.00 0.00 0.00 00:09:27.612 00:09:28.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.546 Nvme0n1 : 9.00 7559.11 29.53 0.00 0.00 0.00 0.00 0.00 00:09:28.546 =================================================================================================================== 00:09:28.546 Total : 7559.11 29.53 0.00 0.00 0.00 0.00 0.00 00:09:28.546 00:09:29.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.481 Nvme0n1 : 10.00 7592.40 29.66 0.00 0.00 0.00 0.00 0.00 00:09:29.481 =================================================================================================================== 00:09:29.481 Total : 7592.40 29.66 0.00 0.00 0.00 0.00 0.00 00:09:29.481 00:09:29.481 00:09:29.481 Latency(us) 00:09:29.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.481 Nvme0n1 : 10.01 7595.75 29.67 0.00 0.00 16845.38 7626.01 52667.11 00:09:29.481 =================================================================================================================== 00:09:29.481 Total : 7595.75 29.67 0.00 0.00 16845.38 7626.01 52667.11 00:09:29.481 0 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73652 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 73652 ']' 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 73652 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73652 00:09:29.481 killing process with pid 73652 00:09:29.481 Received shutdown signal, test time was about 10.000000 seconds 00:09:29.481 00:09:29.481 Latency(us) 00:09:29.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.481 =================================================================================================================== 00:09:29.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = 
sudo ']' 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73652' 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 73652 00:09:29.481 20:25:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 73652 00:09:29.740 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:29.998 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:30.257 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:30.257 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:30.515 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:30.515 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:30.515 20:25:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:31.083 [2024-07-15 20:25:52.297535] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:31.083 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:31.341 2024/07/15 20:25:52 error on JSON-RPC call, method: 
bdev_lvol_get_lvstores, params: map[uuid:645e3e69-b2f2-4986-8a72-eb620108eeaf], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:31.341 request: 00:09:31.341 { 00:09:31.341 "method": "bdev_lvol_get_lvstores", 00:09:31.341 "params": { 00:09:31.341 "uuid": "645e3e69-b2f2-4986-8a72-eb620108eeaf" 00:09:31.341 } 00:09:31.341 } 00:09:31.341 Got JSON-RPC error response 00:09:31.341 GoRPCClient: error on JSON-RPC call 00:09:31.341 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:09:31.341 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:31.341 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:31.341 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:31.341 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.599 aio_bdev 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 07689386-4fdd-4028-aca0-a4788ea466a5 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=07689386-4fdd-4028-aca0-a4788ea466a5 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:31.599 20:25:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:31.859 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07689386-4fdd-4028-aca0-a4788ea466a5 -t 2000 00:09:32.117 [ 00:09:32.117 { 00:09:32.117 "aliases": [ 00:09:32.117 "lvs/lvol" 00:09:32.117 ], 00:09:32.117 "assigned_rate_limits": { 00:09:32.117 "r_mbytes_per_sec": 0, 00:09:32.117 "rw_ios_per_sec": 0, 00:09:32.117 "rw_mbytes_per_sec": 0, 00:09:32.117 "w_mbytes_per_sec": 0 00:09:32.117 }, 00:09:32.117 "block_size": 4096, 00:09:32.117 "claimed": false, 00:09:32.117 "driver_specific": { 00:09:32.117 "lvol": { 00:09:32.117 "base_bdev": "aio_bdev", 00:09:32.117 "clone": false, 00:09:32.117 "esnap_clone": false, 00:09:32.117 "lvol_store_uuid": "645e3e69-b2f2-4986-8a72-eb620108eeaf", 00:09:32.117 "num_allocated_clusters": 38, 00:09:32.117 "snapshot": false, 00:09:32.117 "thin_provision": false 00:09:32.117 } 00:09:32.117 }, 00:09:32.117 "name": "07689386-4fdd-4028-aca0-a4788ea466a5", 00:09:32.117 "num_blocks": 38912, 00:09:32.117 "product_name": "Logical Volume", 00:09:32.117 "supported_io_types": { 00:09:32.117 "abort": false, 00:09:32.117 "compare": false, 00:09:32.117 "compare_and_write": false, 00:09:32.117 "copy": false, 00:09:32.117 "flush": false, 00:09:32.117 "get_zone_info": false, 00:09:32.117 "nvme_admin": false, 00:09:32.117 "nvme_io": false, 00:09:32.117 "nvme_io_md": false, 00:09:32.117 "nvme_iov_md": false, 00:09:32.117 "read": true, 00:09:32.117 "reset": true, 
00:09:32.117 "seek_data": true, 00:09:32.117 "seek_hole": true, 00:09:32.117 "unmap": true, 00:09:32.117 "write": true, 00:09:32.117 "write_zeroes": true, 00:09:32.117 "zcopy": false, 00:09:32.117 "zone_append": false, 00:09:32.117 "zone_management": false 00:09:32.117 }, 00:09:32.117 "uuid": "07689386-4fdd-4028-aca0-a4788ea466a5", 00:09:32.117 "zoned": false 00:09:32.117 } 00:09:32.117 ] 00:09:32.117 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:09:32.117 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:32.117 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:32.374 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:32.374 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:32.374 20:25:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:32.631 20:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:32.631 20:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 07689386-4fdd-4028-aca0-a4788ea466a5 00:09:32.889 20:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 645e3e69-b2f2-4986-8a72-eb620108eeaf 00:09:33.146 20:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.713 20:25:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.972 ************************************ 00:09:33.972 END TEST lvs_grow_clean 00:09:33.972 ************************************ 00:09:33.972 00:09:33.972 real 0m18.569s 00:09:33.972 user 0m17.932s 00:09:33.972 sys 0m2.089s 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:33.972 ************************************ 00:09:33.972 START TEST lvs_grow_dirty 00:09:33.972 ************************************ 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters 
free_clusters 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.972 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.230 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:34.230 20:25:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:34.797 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:34.797 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:34.797 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:35.056 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:35.056 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:35.056 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 888e74f3-8402-412f-82e3-097e1fa99b9d lvol 150 00:09:35.336 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c9b895a9-8218-45d5-a827-203fc75a2929 00:09:35.336 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:35.336 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:35.602 [2024-07-15 20:25:56.898825] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:35.602 [2024-07-15 20:25:56.898915] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:35.602 true 00:09:35.602 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:35.602 20:25:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:35.860 20:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:35.860 20:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:36.119 20:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c9b895a9-8218-45d5-a827-203fc75a2929 00:09:36.685 20:25:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:36.685 [2024-07-15 20:25:58.155484] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.685 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74095 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74095 /var/tmp/bdevperf.sock 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74095 ']' 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.251 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:37.251 [2024-07-15 20:25:58.557938] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:09:37.251 [2024-07-15 20:25:58.558044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:09:37.251 [2024-07-15 20:25:58.694033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.509 [2024-07-15 20:25:58.753981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.509 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.509 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:37.509 20:25:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:37.767 Nvme0n1 00:09:37.767 20:25:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:38.025 [ 00:09:38.025 { 00:09:38.025 "aliases": [ 00:09:38.025 "c9b895a9-8218-45d5-a827-203fc75a2929" 00:09:38.025 ], 00:09:38.025 "assigned_rate_limits": { 00:09:38.025 "r_mbytes_per_sec": 0, 00:09:38.025 "rw_ios_per_sec": 0, 00:09:38.025 "rw_mbytes_per_sec": 0, 00:09:38.025 "w_mbytes_per_sec": 0 00:09:38.025 }, 00:09:38.025 "block_size": 4096, 00:09:38.025 "claimed": false, 00:09:38.025 "driver_specific": { 00:09:38.025 "mp_policy": "active_passive", 00:09:38.025 "nvme": [ 00:09:38.025 { 00:09:38.025 "ctrlr_data": { 00:09:38.025 "ana_reporting": false, 00:09:38.025 "cntlid": 1, 00:09:38.025 "firmware_revision": "24.09", 00:09:38.025 "model_number": "SPDK bdev Controller", 00:09:38.025 "multi_ctrlr": true, 00:09:38.025 "oacs": { 00:09:38.025 "firmware": 0, 00:09:38.025 "format": 0, 00:09:38.025 "ns_manage": 0, 00:09:38.025 "security": 0 00:09:38.025 }, 00:09:38.025 "serial_number": "SPDK0", 00:09:38.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.025 "vendor_id": "0x8086" 00:09:38.025 }, 00:09:38.025 "ns_data": { 00:09:38.025 "can_share": true, 00:09:38.025 "id": 1 00:09:38.025 }, 00:09:38.025 "trid": { 00:09:38.025 "adrfam": "IPv4", 00:09:38.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.025 "traddr": "10.0.0.2", 00:09:38.025 "trsvcid": "4420", 00:09:38.025 "trtype": "TCP" 00:09:38.025 }, 00:09:38.025 "vs": { 00:09:38.025 "nvme_version": "1.3" 00:09:38.025 } 00:09:38.025 } 00:09:38.025 ] 00:09:38.025 }, 00:09:38.025 "memory_domains": [ 00:09:38.025 { 00:09:38.025 "dma_device_id": "system", 00:09:38.025 "dma_device_type": 1 00:09:38.025 } 00:09:38.025 ], 00:09:38.025 "name": "Nvme0n1", 00:09:38.025 "num_blocks": 38912, 00:09:38.025 "product_name": "NVMe disk", 00:09:38.025 "supported_io_types": { 00:09:38.025 "abort": true, 00:09:38.025 "compare": true, 00:09:38.025 "compare_and_write": true, 00:09:38.025 "copy": true, 00:09:38.025 "flush": true, 00:09:38.025 "get_zone_info": false, 00:09:38.025 "nvme_admin": true, 00:09:38.025 "nvme_io": true, 00:09:38.025 "nvme_io_md": false, 00:09:38.025 "nvme_iov_md": false, 00:09:38.025 "read": true, 00:09:38.025 "reset": true, 00:09:38.025 "seek_data": false, 00:09:38.025 "seek_hole": false, 00:09:38.025 "unmap": true, 00:09:38.025 "write": true, 00:09:38.025 "write_zeroes": true, 00:09:38.025 "zcopy": false, 00:09:38.025 
"zone_append": false, 00:09:38.025 "zone_management": false 00:09:38.025 }, 00:09:38.025 "uuid": "c9b895a9-8218-45d5-a827-203fc75a2929", 00:09:38.025 "zoned": false 00:09:38.025 } 00:09:38.025 ] 00:09:38.283 20:25:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74129 00:09:38.283 20:25:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:38.283 20:25:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:38.283 Running I/O for 10 seconds... 00:09:39.217 Latency(us) 00:09:39.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.217 Nvme0n1 : 1.00 7190.00 28.09 0.00 0.00 0.00 0.00 0.00 00:09:39.217 =================================================================================================================== 00:09:39.217 Total : 7190.00 28.09 0.00 0.00 0.00 0.00 0.00 00:09:39.217 00:09:40.151 20:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:40.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.409 Nvme0n1 : 2.00 6916.50 27.02 0.00 0.00 0.00 0.00 0.00 00:09:40.409 =================================================================================================================== 00:09:40.409 Total : 6916.50 27.02 0.00 0.00 0.00 0.00 0.00 00:09:40.409 00:09:40.666 true 00:09:40.666 20:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:40.666 20:26:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.925 20:26:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.925 20:26:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.925 20:26:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 74129 00:09:41.183 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.183 Nvme0n1 : 3.00 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:09:41.183 =================================================================================================================== 00:09:41.183 Total : 6592.00 25.75 0.00 0.00 0.00 0.00 0.00 00:09:41.183 00:09:42.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.570 Nvme0n1 : 4.00 6690.75 26.14 0.00 0.00 0.00 0.00 0.00 00:09:42.570 =================================================================================================================== 00:09:42.570 Total : 6690.75 26.14 0.00 0.00 0.00 0.00 0.00 00:09:42.570 00:09:43.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.499 Nvme0n1 : 5.00 6845.60 26.74 0.00 0.00 0.00 0.00 0.00 00:09:43.499 =================================================================================================================== 00:09:43.499 Total : 6845.60 26.74 0.00 0.00 0.00 0.00 0.00 00:09:43.499 00:09:44.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.428 
Nvme0n1 : 6.00 6891.67 26.92 0.00 0.00 0.00 0.00 0.00 00:09:44.428 =================================================================================================================== 00:09:44.428 Total : 6891.67 26.92 0.00 0.00 0.00 0.00 0.00 00:09:44.428 00:09:45.359 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.359 Nvme0n1 : 7.00 6930.00 27.07 0.00 0.00 0.00 0.00 0.00 00:09:45.359 =================================================================================================================== 00:09:45.359 Total : 6930.00 27.07 0.00 0.00 0.00 0.00 0.00 00:09:45.359 00:09:46.292 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.292 Nvme0n1 : 8.00 6667.25 26.04 0.00 0.00 0.00 0.00 0.00 00:09:46.292 =================================================================================================================== 00:09:46.292 Total : 6667.25 26.04 0.00 0.00 0.00 0.00 0.00 00:09:46.292 00:09:47.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.225 Nvme0n1 : 9.00 6604.78 25.80 0.00 0.00 0.00 0.00 0.00 00:09:47.225 =================================================================================================================== 00:09:47.225 Total : 6604.78 25.80 0.00 0.00 0.00 0.00 0.00 00:09:47.225 00:09:48.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.596 Nvme0n1 : 10.00 6585.20 25.72 0.00 0.00 0.00 0.00 0.00 00:09:48.596 =================================================================================================================== 00:09:48.596 Total : 6585.20 25.72 0.00 0.00 0.00 0.00 0.00 00:09:48.596 00:09:48.596 00:09:48.596 Latency(us) 00:09:48.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.596 Nvme0n1 : 10.02 6585.56 25.72 0.00 0.00 19418.97 2412.92 276442.76 00:09:48.596 =================================================================================================================== 00:09:48.596 Total : 6585.56 25.72 0.00 0.00 19418.97 2412.92 276442.76 00:09:48.596 0 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74095 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 74095 ']' 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 74095 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74095 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:48.596 killing process with pid 74095 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74095' 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 74095 00:09:48.596 Received shutdown signal, test time was about 10.000000 seconds 00:09:48.596 00:09:48.596 Latency(us) 00:09:48.596 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.596 =================================================================================================================== 00:09:48.596 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 74095 00:09:48.596 20:26:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.853 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.112 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:49.112 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.369 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.369 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.369 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 73479 00:09:49.369 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 73479 00:09:49.369 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 73479 Killed "${NVMF_APP[@]}" "$@" 00:09:49.369 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=74292 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 74292 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 74292 ']' 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
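The lvol-store grow flow that the dirty case above just exercised condenses to a handful of RPCs. The sketch below is assembled only from the commands visible in this log (same paths, sizes and options; $rpc stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py and the UUIDs are the ones reported above); it is a recap, not the test script itself, and in the actual run the lvol is exported over NVMe-oF with bdevperf writing to it while the grow is issued.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"                                     # 200M backing file
    $rpc bdev_aio_create "$aio" aio_bdev 4096                   # AIO bdev with 4096-byte blocks
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)            # 150M logical volume
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before growing
    truncate -s 400M "$aio"                                     # enlarge the backing file
    $rpc bdev_aio_rescan aio_bdev                               # let the AIO bdev pick up the new size
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                       # issued while bdevperf I/O is in flight
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 afterwards

The kill -9 of the old target above is what makes this the "dirty" variant: the lvstore is never torn down cleanly, so the blobstore recovery notices a few entries below are the expected outcome when the AIO bdev is re-created.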
00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.370 20:26:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.628 [2024-07-15 20:26:10.886170] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:49.628 [2024-07-15 20:26:10.886296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.628 [2024-07-15 20:26:11.036246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.628 [2024-07-15 20:26:11.123019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.628 [2024-07-15 20:26:11.123108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.628 [2024-07-15 20:26:11.123133] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.628 [2024-07-15 20:26:11.123149] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.628 [2024-07-15 20:26:11.123160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.628 [2024-07-15 20:26:11.123204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.579 20:26:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.837 [2024-07-15 20:26:12.213480] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.837 [2024-07-15 20:26:12.213775] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.837 [2024-07-15 20:26:12.213937] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c9b895a9-8218-45d5-a827-203fc75a2929 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c9b895a9-8218-45d5-a827-203fc75a2929 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:50.837 20:26:12 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:50.837 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.403 20:26:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9b895a9-8218-45d5-a827-203fc75a2929 -t 2000 00:09:51.661 [ 00:09:51.661 { 00:09:51.661 "aliases": [ 00:09:51.661 "lvs/lvol" 00:09:51.661 ], 00:09:51.661 "assigned_rate_limits": { 00:09:51.661 "r_mbytes_per_sec": 0, 00:09:51.661 "rw_ios_per_sec": 0, 00:09:51.661 "rw_mbytes_per_sec": 0, 00:09:51.661 "w_mbytes_per_sec": 0 00:09:51.661 }, 00:09:51.661 "block_size": 4096, 00:09:51.661 "claimed": false, 00:09:51.661 "driver_specific": { 00:09:51.661 "lvol": { 00:09:51.661 "base_bdev": "aio_bdev", 00:09:51.661 "clone": false, 00:09:51.661 "esnap_clone": false, 00:09:51.661 "lvol_store_uuid": "888e74f3-8402-412f-82e3-097e1fa99b9d", 00:09:51.661 "num_allocated_clusters": 38, 00:09:51.661 "snapshot": false, 00:09:51.661 "thin_provision": false 00:09:51.661 } 00:09:51.661 }, 00:09:51.661 "name": "c9b895a9-8218-45d5-a827-203fc75a2929", 00:09:51.661 "num_blocks": 38912, 00:09:51.661 "product_name": "Logical Volume", 00:09:51.661 "supported_io_types": { 00:09:51.661 "abort": false, 00:09:51.661 "compare": false, 00:09:51.661 "compare_and_write": false, 00:09:51.661 "copy": false, 00:09:51.661 "flush": false, 00:09:51.661 "get_zone_info": false, 00:09:51.661 "nvme_admin": false, 00:09:51.661 "nvme_io": false, 00:09:51.661 "nvme_io_md": false, 00:09:51.661 "nvme_iov_md": false, 00:09:51.661 "read": true, 00:09:51.661 "reset": true, 00:09:51.661 "seek_data": true, 00:09:51.661 "seek_hole": true, 00:09:51.661 "unmap": true, 00:09:51.661 "write": true, 00:09:51.661 "write_zeroes": true, 00:09:51.661 "zcopy": false, 00:09:51.661 "zone_append": false, 00:09:51.661 "zone_management": false 00:09:51.661 }, 00:09:51.661 "uuid": "c9b895a9-8218-45d5-a827-203fc75a2929", 00:09:51.661 "zoned": false 00:09:51.661 } 00:09:51.661 ] 00:09:51.661 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:51.661 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:51.661 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:51.919 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:51.919 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:51.919 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:52.484 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:52.484 20:26:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:52.742 [2024-07-15 20:26:14.107073] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # 
NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.742 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.743 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:52.743 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:52.743 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:53.001 2024/07/15 20:26:14 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:888e74f3-8402-412f-82e3-097e1fa99b9d], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:09:53.001 request: 00:09:53.001 { 00:09:53.001 "method": "bdev_lvol_get_lvstores", 00:09:53.001 "params": { 00:09:53.001 "uuid": "888e74f3-8402-412f-82e3-097e1fa99b9d" 00:09:53.001 } 00:09:53.001 } 00:09:53.001 Got JSON-RPC error response 00:09:53.001 GoRPCClient: error on JSON-RPC call 00:09:53.001 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:53.002 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:53.002 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:53.002 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:53.002 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:53.260 aio_bdev 00:09:53.260 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c9b895a9-8218-45d5-a827-203fc75a2929 00:09:53.260 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c9b895a9-8218-45d5-a827-203fc75a2929 00:09:53.260 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:53.260 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:53.260 20:26:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:53.260 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:53.260 20:26:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:53.827 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c9b895a9-8218-45d5-a827-203fc75a2929 -t 2000 00:09:54.085 [ 00:09:54.085 { 00:09:54.085 "aliases": [ 00:09:54.085 "lvs/lvol" 00:09:54.085 ], 00:09:54.085 "assigned_rate_limits": { 00:09:54.085 "r_mbytes_per_sec": 0, 00:09:54.085 "rw_ios_per_sec": 0, 00:09:54.085 "rw_mbytes_per_sec": 0, 00:09:54.085 "w_mbytes_per_sec": 0 00:09:54.085 }, 00:09:54.085 "block_size": 4096, 00:09:54.085 "claimed": false, 00:09:54.085 "driver_specific": { 00:09:54.085 "lvol": { 00:09:54.085 "base_bdev": "aio_bdev", 00:09:54.085 "clone": false, 00:09:54.085 "esnap_clone": false, 00:09:54.085 "lvol_store_uuid": "888e74f3-8402-412f-82e3-097e1fa99b9d", 00:09:54.085 "num_allocated_clusters": 38, 00:09:54.085 "snapshot": false, 00:09:54.085 "thin_provision": false 00:09:54.085 } 00:09:54.085 }, 00:09:54.085 "name": "c9b895a9-8218-45d5-a827-203fc75a2929", 00:09:54.085 "num_blocks": 38912, 00:09:54.085 "product_name": "Logical Volume", 00:09:54.085 "supported_io_types": { 00:09:54.085 "abort": false, 00:09:54.085 "compare": false, 00:09:54.085 "compare_and_write": false, 00:09:54.085 "copy": false, 00:09:54.085 "flush": false, 00:09:54.085 "get_zone_info": false, 00:09:54.085 "nvme_admin": false, 00:09:54.085 "nvme_io": false, 00:09:54.085 "nvme_io_md": false, 00:09:54.085 "nvme_iov_md": false, 00:09:54.085 "read": true, 00:09:54.085 "reset": true, 00:09:54.085 "seek_data": true, 00:09:54.085 "seek_hole": true, 00:09:54.085 "unmap": true, 00:09:54.085 "write": true, 00:09:54.085 "write_zeroes": true, 00:09:54.085 "zcopy": false, 00:09:54.085 "zone_append": false, 00:09:54.085 "zone_management": false 00:09:54.085 }, 00:09:54.085 "uuid": "c9b895a9-8218-45d5-a827-203fc75a2929", 00:09:54.085 "zoned": false 00:09:54.085 } 00:09:54.085 ] 00:09:54.085 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:54.085 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:54.086 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:54.343 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:54.343 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:54.343 20:26:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:54.910 20:26:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:54.910 20:26:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c9b895a9-8218-45d5-a827-203fc75a2929 00:09:55.168 20:26:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 888e74f3-8402-412f-82e3-097e1fa99b9d 00:09:55.427 20:26:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:55.718 20:26:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:56.283 ************************************ 00:09:56.283 END TEST lvs_grow_dirty 00:09:56.283 ************************************ 00:09:56.283 00:09:56.283 real 0m22.151s 00:09:56.283 user 0m45.021s 00:09:56.283 sys 0m7.980s 00:09:56.283 20:26:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.283 20:26:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:56.283 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:56.283 20:26:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:56.283 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:56.283 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:56.284 nvmf_trace.0 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:56.284 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:56.284 rmmod nvme_tcp 00:09:56.284 rmmod nvme_fabrics 00:09:56.284 rmmod nvme_keyring 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 74292 ']' 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 74292 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 74292 ']' 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 74292 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:56.542 20:26:17 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74292 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:56.542 killing process with pid 74292 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74292' 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 74292 00:09:56.542 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 74292 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.543 20:26:17 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.543 20:26:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:56.543 00:09:56.543 real 0m43.248s 00:09:56.543 user 1m10.816s 00:09:56.543 sys 0m10.825s 00:09:56.543 20:26:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.543 20:26:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:56.543 ************************************ 00:09:56.543 END TEST nvmf_lvs_grow 00:09:56.543 ************************************ 00:09:56.801 20:26:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.801 20:26:18 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:56.801 20:26:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.801 20:26:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.801 20:26:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.801 ************************************ 00:09:56.801 START TEST nvmf_bdev_io_wait 00:09:56.801 ************************************ 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:56.801 * Looking for test storage... 
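Teardown for the lvs_grow tests, as captured above, simply unwinds the setup. Condensed for reference ($rpc and the UUIDs as in the earlier sketch, taken from the values reported in this log):

    $rpc bdev_lvol_delete c9b895a9-8218-45d5-a827-203fc75a2929           # remove the lvol first
    $rpc bdev_lvol_delete_lvstore -u 888e74f3-8402-412f-82e3-097e1fa99b9d
    $rpc bdev_aio_delete aio_bdev                                         # drop the AIO bdev
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev          # delete the backing file

nvmftestfini then archives /dev/shm/nvmf_trace.0, unloads the nvme_tcp/nvme_fabrics/nvme_keyring modules and kills the nvmf_tgt process (pid 74292 in this run), which is what the rmmod and killprocess output above reflects.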
00:09:56.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.801 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:56.802 Cannot find device "nvmf_tgt_br" 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.802 Cannot find device "nvmf_tgt_br2" 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:56.802 Cannot find device "nvmf_tgt_br" 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:56.802 Cannot find device "nvmf_tgt_br2" 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:56.802 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
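With NET_TYPE=virt, nvmf_veth_init builds a small veth/bridge topology instead of touching physical NICs; the "Cannot find device" and "Cannot open network namespace" messages above are just the cleanup of leftovers that do not exist yet. The entries that follow create the topology in full; condensed to its essentials (interface, namespace and address names exactly as used here, with the addr add / link up steps omitted):

    ip netns add nvmf_tgt_ns_spdk                               # target runs inside this namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, gets 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, gets 10.0.0.2/24
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # second target address, 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip link add nvmf_br type bridge                             # bridge the *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

Reachability is then verified with the pings to 10.0.0.2, 10.0.0.3 and, from inside the namespace, 10.0.0.1 shown below.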
00:09:57.060 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:57.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.060 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:57.061 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:57.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:09:57.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:09:57.061 00:09:57.061 --- 10.0.0.2 ping statistics --- 00:09:57.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.061 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:57.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:57.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:09:57.061 00:09:57.061 --- 10.0.0.3 ping statistics --- 00:09:57.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.061 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:57.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:57.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:09:57.061 00:09:57.061 --- 10.0.0.1 ping statistics --- 00:09:57.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:57.061 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=74728 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 74728 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 74728 ']' 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:57.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
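For the bdev_io_wait test the target is started inside nvmf_tgt_ns_spdk with --wait-for-rpc, so bdev options can be changed before the framework finishes initializing. The configuration the next entries apply over RPC is, in condensed form (rpc_cmd is the autotest wrapper around scripts/rpc.py; the malloc sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above):

    # launched as: ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
    rpc_cmd bdev_set_options -p 5 -c 1         # deliberately small bdev_io pool/cache so I/O can hit the io_wait path this test exercises
    rpc_cmd framework_start_init               # resume startup now that the options are in place
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420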
00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:57.061 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.320 [2024-07-15 20:26:18.628374] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:57.320 [2024-07-15 20:26:18.628514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.320 [2024-07-15 20:26:18.771382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.578 [2024-07-15 20:26:18.839779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.578 [2024-07-15 20:26:18.839853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.578 [2024-07-15 20:26:18.839865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.578 [2024-07-15 20:26:18.839888] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.578 [2024-07-15 20:26:18.839895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.578 [2024-07-15 20:26:18.839979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.578 [2024-07-15 20:26:18.840065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.578 [2024-07-15 20:26:18.840243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.578 [2024-07-15 20:26:18.840263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:19 
nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 [2024-07-15 20:26:19.010479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 Malloc0 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:57.578 [2024-07-15 20:26:19.061903] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=74763 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:57.578 { 00:09:57.578 "params": { 00:09:57.578 "name": "Nvme$subsystem", 00:09:57.578 "trtype": "$TEST_TRANSPORT", 00:09:57.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.578 "adrfam": "ipv4", 00:09:57.578 "trsvcid": "$NVMF_PORT", 00:09:57.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.578 "hdgst": ${hdgst:-false}, 00:09:57.578 "ddgst": ${ddgst:-false} 00:09:57.578 }, 00:09:57.578 "method": "bdev_nvme_attach_controller" 00:09:57.578 } 
00:09:57.578 EOF 00:09:57.578 )") 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=74765 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=74768 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:57.578 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:57.578 { 00:09:57.578 "params": { 00:09:57.578 "name": "Nvme$subsystem", 00:09:57.578 "trtype": "$TEST_TRANSPORT", 00:09:57.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.578 "adrfam": "ipv4", 00:09:57.578 "trsvcid": "$NVMF_PORT", 00:09:57.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.578 "hdgst": ${hdgst:-false}, 00:09:57.578 "ddgst": ${ddgst:-false} 00:09:57.578 }, 00:09:57.578 "method": "bdev_nvme_attach_controller" 00:09:57.578 } 00:09:57.579 EOF 00:09:57.579 )") 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=74770 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:57.579 { 00:09:57.579 "params": { 00:09:57.579 "name": "Nvme$subsystem", 00:09:57.579 "trtype": "$TEST_TRANSPORT", 00:09:57.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.579 "adrfam": "ipv4", 00:09:57.579 "trsvcid": "$NVMF_PORT", 00:09:57.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.579 "hdgst": ${hdgst:-false}, 00:09:57.579 "ddgst": ${ddgst:-false} 00:09:57.579 }, 00:09:57.579 "method": "bdev_nvme_attach_controller" 00:09:57.579 } 00:09:57.579 EOF 00:09:57.579 )") 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
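The bdev_io_wait test starts four bdevperf instances in parallel (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), each reading its bdev configuration over /dev/fd/63 from gen_nvmf_target_json via process substitution, and records the PIDs (74763, 74765, 74768, 74770 above) so it can wait on them later. A rough sketch of that launch pattern, not the literal script:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  # each instance gets its own core mask and shm id, plus the generated NVMe-oF config on fd 63
  "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  WRITE_PID=$!
  "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
  READ_PID=$!
  "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  FLUSH_PID=$!
  "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
  UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"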
00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:57.579 "params": { 00:09:57.579 "name": "Nvme1", 00:09:57.579 "trtype": "tcp", 00:09:57.579 "traddr": "10.0.0.2", 00:09:57.579 "adrfam": "ipv4", 00:09:57.579 "trsvcid": "4420", 00:09:57.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.579 "hdgst": false, 00:09:57.579 "ddgst": false 00:09:57.579 }, 00:09:57.579 "method": "bdev_nvme_attach_controller" 00:09:57.579 }' 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:57.579 { 00:09:57.579 "params": { 00:09:57.579 "name": "Nvme$subsystem", 00:09:57.579 "trtype": "$TEST_TRANSPORT", 00:09:57.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.579 "adrfam": "ipv4", 00:09:57.579 "trsvcid": "$NVMF_PORT", 00:09:57.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.579 "hdgst": ${hdgst:-false}, 00:09:57.579 "ddgst": ${ddgst:-false} 00:09:57.579 }, 00:09:57.579 "method": "bdev_nvme_attach_controller" 00:09:57.579 } 00:09:57.579 EOF 00:09:57.579 )") 00:09:57.579 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:57.579 "params": { 00:09:57.579 "name": "Nvme1", 00:09:57.579 "trtype": "tcp", 00:09:57.579 "traddr": "10.0.0.2", 00:09:57.579 "adrfam": "ipv4", 00:09:57.579 "trsvcid": "4420", 00:09:57.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.579 "hdgst": false, 00:09:57.579 "ddgst": false 00:09:57.579 }, 00:09:57.579 "method": "bdev_nvme_attach_controller" 00:09:57.579 }' 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:57.837 "params": { 00:09:57.837 "name": "Nvme1", 00:09:57.837 "trtype": "tcp", 00:09:57.837 "traddr": "10.0.0.2", 00:09:57.837 "adrfam": "ipv4", 00:09:57.837 "trsvcid": "4420", 00:09:57.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.837 "hdgst": false, 00:09:57.837 "ddgst": false 00:09:57.837 }, 00:09:57.837 "method": "bdev_nvme_attach_controller" 00:09:57.837 }' 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
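The rendered configuration above boils down to one bdev_nvme_attach_controller call per bdevperf process: attach an NVMe/TCP controller at 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, with header and data digests disabled, which exposes the target's Malloc0 namespace as bdev Nvme1n1. The same attachment can also be issued at runtime over a bdevperf RPC socket instead of a --json config, which is how the queue_depth test further down does it; roughly:

  # assumes a bdevperf started with -z -r /var/tmp/bdevperf.sock, as in the queue_depth test below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1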
00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:57.837 "params": { 00:09:57.837 "name": "Nvme1", 00:09:57.837 "trtype": "tcp", 00:09:57.837 "traddr": "10.0.0.2", 00:09:57.837 "adrfam": "ipv4", 00:09:57.837 "trsvcid": "4420", 00:09:57.837 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.837 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.837 "hdgst": false, 00:09:57.837 "ddgst": false 00:09:57.837 }, 00:09:57.837 "method": "bdev_nvme_attach_controller" 00:09:57.837 }' 00:09:57.837 20:26:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 74763 00:09:57.837 [2024-07-15 20:26:19.140694] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:57.837 [2024-07-15 20:26:19.141043] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:57.837 [2024-07-15 20:26:19.143377] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:57.837 [2024-07-15 20:26:19.143482] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:57.837 [2024-07-15 20:26:19.161198] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:57.837 [2024-07-15 20:26:19.161226] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:09:57.837 [2024-07-15 20:26:19.161297] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:57.837 [2024-07-15 20:26:19.162146] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:58.096 [2024-07-15 20:26:19.356084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.096 [2024-07-15 20:26:19.377225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.096 [2024-07-15 20:26:19.416166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.096 [2024-07-15 20:26:19.427720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:58.096 [2024-07-15 20:26:19.448278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:58.096 [2024-07-15 20:26:19.461071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.096 [2024-07-15 20:26:19.467019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:58.096 [2024-07-15 20:26:19.531815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:58.096 Running I/O for 1 seconds... 00:09:58.096 Running I/O for 1 seconds... 00:09:58.096 Running I/O for 1 seconds... 00:09:58.353 Running I/O for 1 seconds... 
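Each "Running I/O for 1 seconds..." line above is one of the four bdevperf instances; the per-job summaries that follow report runtime, IOPS, throughput, failures/timeouts, and latency in microseconds. As a rough sanity check on how those columns relate (numbers taken from the flush job below; bdevperf's own accounting may differ slightly):

  throughput = IOPS x IO size        : 172886.00 io/s x 4096 B  ~ 675.34 MiB/s  (matches the MiB/s column)
  Little's law, queue depth 128      : 128 / 737.43 us          ~ 173.6k io/s   (reported ~172.9k)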
00:09:59.288 00:09:59.288 Latency(us) 00:09:59.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.288 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:59.288 Nvme1n1 : 1.00 172886.00 675.34 0.00 0.00 737.43 333.27 4915.20 00:09:59.288 =================================================================================================================== 00:09:59.288 Total : 172886.00 675.34 0.00 0.00 737.43 333.27 4915.20 00:09:59.288 00:09:59.288 Latency(us) 00:09:59.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.288 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:59.288 Nvme1n1 : 1.01 6097.98 23.82 0.00 0.00 20863.48 2978.91 27167.65 00:09:59.288 =================================================================================================================== 00:09:59.288 Total : 6097.98 23.82 0.00 0.00 20863.48 2978.91 27167.65 00:09:59.288 00:09:59.288 Latency(us) 00:09:59.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.288 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:59.289 Nvme1n1 : 1.06 3678.50 14.37 0.00 0.00 33921.95 6762.12 68634.07 00:09:59.289 =================================================================================================================== 00:09:59.289 Total : 3678.50 14.37 0.00 0.00 33921.95 6762.12 68634.07 00:09:59.289 00:09:59.289 Latency(us) 00:09:59.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.289 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:59.289 Nvme1n1 : 1.01 3618.35 14.13 0.00 0.00 35122.34 13226.36 74830.20 00:09:59.289 =================================================================================================================== 00:09:59.289 Total : 3618.35 14.13 0.00 0.00 35122.34 13226.36 74830.20 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 74765 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 74768 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 74770 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.547 20:26:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.547 rmmod nvme_tcp 00:09:59.547 rmmod nvme_fabrics 00:09:59.547 rmmod nvme_keyring 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 74728 ']' 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 74728 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 74728 ']' 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 74728 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74728 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:59.547 killing process with pid 74728 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74728' 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 74728 00:09:59.547 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 74728 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:59.806 00:09:59.806 real 0m3.162s 00:09:59.806 user 0m14.316s 00:09:59.806 sys 0m1.716s 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.806 20:26:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:59.806 ************************************ 00:09:59.806 END TEST nvmf_bdev_io_wait 00:09:59.806 ************************************ 00:09:59.806 20:26:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:59.806 20:26:21 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:59.806 20:26:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.806 20:26:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.806 20:26:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.806 ************************************ 00:09:59.806 START TEST nvmf_queue_depth 00:09:59.806 ************************************ 00:09:59.806 20:26:21 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:00.065 * Looking for test storage... 00:10:00.065 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.065 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:00.066 Cannot find device "nvmf_tgt_br" 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.066 Cannot find device "nvmf_tgt_br2" 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:00.066 Cannot find device "nvmf_tgt_br" 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:00.066 Cannot find device "nvmf_tgt_br2" 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:00.066 20:26:21 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.066 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:10:00.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:10:00.325 00:10:00.325 --- 10.0.0.2 ping statistics --- 00:10:00.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.325 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:00.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:00.325 00:10:00.325 --- 10.0.0.3 ping statistics --- 00:10:00.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.325 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:00.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:10:00.325 00:10:00.325 --- 10.0.0.1 ping statistics --- 00:10:00.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.325 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.325 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=74974 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 74974 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 74974 ']' 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
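nvmfappstart above launches the target inside the namespace with core mask 0x2 and then blocks until its RPC socket answers (hence the "Waiting for process to start up and listen on UNIX domain socket..." message). A rough sketch of that launch-and-wait step, not the literal waitforlisten implementation:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to serve requests
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early"; exit 1; }
      sleep 0.5
  done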
00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.326 20:26:21 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.326 [2024-07-15 20:26:21.818940] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:10:00.326 [2024-07-15 20:26:21.819033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.585 [2024-07-15 20:26:21.959617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.585 [2024-07-15 20:26:22.018394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.585 [2024-07-15 20:26:22.018445] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.585 [2024-07-15 20:26:22.018456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.585 [2024-07-15 20:26:22.018465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.585 [2024-07-15 20:26:22.018472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.585 [2024-07-15 20:26:22.018502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 [2024-07-15 20:26:22.133928] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 Malloc0 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
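The rpc_cmd calls traced here, together with the two that follow below, build the queue_depth target: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, and a subsystem that exports it on 10.0.0.2:4420. Issued directly with the RPC client against the default application socket, the sequence is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
  $rpc nvmf_create_transport -t tcp -o -u 8192          # same transport options as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MB bdev, 512 B block size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                               # traced below
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # traced below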
00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 [2024-07-15 20:26:22.180913] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=75010 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 75010 /var/tmp/bdevperf.sock 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 75010 ']' 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:00.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.875 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.875 [2024-07-15 20:26:22.248948] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:10:00.875 [2024-07-15 20:26:22.249611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75010 ] 00:10:01.159 [2024-07-15 20:26:22.390787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.159 [2024-07-15 20:26:22.476289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.159 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.159 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:10:01.159 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:01.159 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.159 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:01.416 NVMe0n1 00:10:01.416 20:26:22 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.416 20:26:22 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:01.416 Running I/O for 10 seconds... 00:10:11.473 00:10:11.473 Latency(us) 00:10:11.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.473 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:11.473 Verification LBA range: start 0x0 length 0x4000 00:10:11.473 NVMe0n1 : 10.10 7809.12 30.50 0.00 0.00 130514.80 29908.25 107240.73 00:10:11.473 =================================================================================================================== 00:10:11.473 Total : 7809.12 30.50 0.00 0.00 130514.80 29908.25 107240.73 00:10:11.473 0 00:10:11.473 20:26:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 75010 00:10:11.473 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 75010 ']' 00:10:11.473 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 75010 00:10:11.473 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:11.473 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.473 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75010 00:10:11.731 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:11.731 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:11.731 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75010' 00:10:11.731 killing process with pid 75010 00:10:11.731 Received shutdown signal, test time was about 10.000000 seconds 00:10:11.731 00:10:11.731 Latency(us) 00:10:11.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:11.731 =================================================================================================================== 00:10:11.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:11.731 20:26:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 75010 00:10:11.731 20:26:32 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 75010 00:10:11.731 20:26:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:11.731 20:26:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:11.731 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.731 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.988 rmmod nvme_tcp 00:10:11.988 rmmod nvme_fabrics 00:10:11.988 rmmod nvme_keyring 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 74974 ']' 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 74974 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 74974 ']' 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 74974 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74974 00:10:11.988 killing process with pid 74974 00:10:11.988 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:11.989 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:11.989 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74974' 00:10:11.989 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 74974 00:10:11.989 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 74974 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:12.246 00:10:12.246 real 0m12.257s 00:10:12.246 user 0m21.239s 00:10:12.246 sys 0m1.990s 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.246 20:26:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:12.246 ************************************ 00:10:12.246 END TEST nvmf_queue_depth 00:10:12.246 ************************************ 00:10:12.246 20:26:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:12.246 20:26:33 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:12.246 20:26:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:12.246 20:26:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.246 20:26:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:12.246 ************************************ 00:10:12.246 START TEST nvmf_target_multipath 00:10:12.246 ************************************ 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:12.246 * Looking for test storage... 00:10:12.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:12.246 20:26:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- 
# have_pci_nics=0 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:12.247 20:26:33 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:12.247 Cannot find device "nvmf_tgt_br" 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:12.247 Cannot find device "nvmf_tgt_br2" 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:12.247 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:12.504 Cannot find device "nvmf_tgt_br" 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:12.504 Cannot find device "nvmf_tgt_br2" 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:12.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:12.504 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:12.504 
20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.504 20:26:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:12.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:10:12.762 00:10:12.762 --- 10.0.0.2 ping statistics --- 00:10:12.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.762 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:12.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:12.762 00:10:12.762 --- 10.0.0.3 ping statistics --- 00:10:12.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.762 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:12.762 00:10:12.762 --- 10.0.0.1 ping statistics --- 00:10:12.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.762 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=75324 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 75324 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 75324 ']' 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.762 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:12.762 [2024-07-15 20:26:34.143755] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:10:12.762 [2024-07-15 20:26:34.143921] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.038 [2024-07-15 20:26:34.289887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:13.038 [2024-07-15 20:26:34.364613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:13.038 [2024-07-15 20:26:34.364977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:13.038 [2024-07-15 20:26:34.365238] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:13.038 [2024-07-15 20:26:34.365740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:13.038 [2024-07-15 20:26:34.366019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:13.038 [2024-07-15 20:26:34.366374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.038 [2024-07-15 20:26:34.366449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.038 [2024-07-15 20:26:34.366511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.038 [2024-07-15 20:26:34.366736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.038 20:26:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.604 [2024-07-15 20:26:34.833332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.604 20:26:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:13.862 Malloc0 00:10:13.862 20:26:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:14.120 20:26:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.686 20:26:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.944 [2024-07-15 20:26:36.304754] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.944 20:26:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.3 -s 4420 00:10:15.203 [2024-07-15 20:26:36.561247] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:15.203 20:26:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:15.460 20:26:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:15.719 20:26:37 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.719 20:26:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:15.719 20:26:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.719 20:26:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.719 20:26:37 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- 
# p1=nvme0c1n1 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:17.613 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=75458 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:17.614 20:26:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:17.614 [global] 00:10:17.614 thread=1 00:10:17.614 invalidate=1 00:10:17.614 rw=randrw 00:10:17.614 time_based=1 00:10:17.614 runtime=6 00:10:17.614 ioengine=libaio 00:10:17.614 direct=1 00:10:17.614 bs=4096 00:10:17.614 iodepth=128 00:10:17.614 norandommap=0 00:10:17.614 numjobs=1 00:10:17.614 00:10:17.614 verify_dump=1 00:10:17.614 verify_backlog=512 00:10:17.614 verify_state_save=0 00:10:17.614 do_verify=1 00:10:17.614 verify=crc32c-intel 00:10:17.614 [job0] 00:10:17.614 filename=/dev/nvme0n1 00:10:17.614 Could not set queue depth (nvme0n1) 00:10:17.870 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.870 fio-3.35 00:10:17.870 Starting 1 thread 00:10:18.804 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:19.061 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:19.319 20:26:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:20.252 20:26:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:20.252 20:26:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:20.252 20:26:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:20.252 20:26:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:20.510 20:26:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:21.074 20:26:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:22.008 20:26:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:22.008 20:26:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:22.008 20:26:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:22.008 20:26:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 75458 00:10:23.903 00:10:23.903 job0: (groupid=0, jobs=1): err= 0: pid=75480: Mon Jul 15 20:26:45 2024 00:10:23.903 read: IOPS=9430, BW=36.8MiB/s (38.6MB/s)(221MiB/6008msec) 00:10:23.903 slat (usec): min=2, max=6263, avg=60.41, stdev=278.72 00:10:23.903 clat (usec): min=562, max=39091, avg=9262.04, stdev=2089.68 00:10:23.903 lat (usec): min=620, max=39102, avg=9322.45, stdev=2105.48 00:10:23.903 clat percentiles (usec): 00:10:23.903 | 1.00th=[ 4817], 5.00th=[ 6652], 10.00th=[ 7242], 20.00th=[ 7701], 00:10:23.903 | 30.00th=[ 8094], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9503], 00:10:23.903 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11600], 95.00th=[12518], 00:10:23.903 | 99.00th=[15139], 99.50th=[15926], 99.90th=[28181], 99.95th=[34341], 00:10:23.903 | 99.99th=[35914] 00:10:23.903 bw ( KiB/s): min= 9360, max=24848, per=52.10%, avg=19652.33, stdev=3837.56, samples=12 00:10:23.903 iops : min= 2340, max= 6212, avg=4913.08, stdev=959.39, samples=12 00:10:23.903 write: IOPS=5433, BW=21.2MiB/s (22.3MB/s)(115MiB/5438msec); 0 zone resets 00:10:23.903 slat (usec): min=3, max=3749, avg=73.75, stdev=184.76 00:10:23.903 clat (usec): min=510, max=35250, avg=8025.11, stdev=1950.59 00:10:23.903 lat (usec): min=548, max=35278, avg=8098.85, stdev=1960.04 00:10:23.903 clat percentiles (usec): 00:10:23.903 | 1.00th=[ 3818], 5.00th=[ 5407], 10.00th=[ 6259], 20.00th=[ 6783], 00:10:23.903 | 30.00th=[ 7177], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8225], 00:10:23.903 | 70.00th=[ 8717], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10421], 00:10:23.903 | 99.00th=[13042], 99.50th=[14746], 99.90th=[31065], 99.95th=[31589], 00:10:23.903 | 99.99th=[35390] 00:10:23.903 bw ( KiB/s): min= 9672, max=24840, per=90.42%, avg=19651.50, stdev=3657.04, samples=12 00:10:23.903 iops : min= 2418, max= 6210, avg=4912.83, stdev=914.22, samples=12 00:10:23.903 lat (usec) : 750=0.02%, 1000=0.01% 00:10:23.903 lat (msec) : 2=0.06%, 4=0.57%, 10=75.76%, 20=23.36%, 50=0.23% 00:10:23.903 cpu : usr=5.69%, sys=23.81%, ctx=5612, majf=0, minf=84 00:10:23.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:23.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.903 issued rwts: total=56657,29545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.903 00:10:23.903 Run status group 0 (all jobs): 00:10:23.903 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=221MiB (232MB), run=6008-6008msec 00:10:23.903 WRITE: bw=21.2MiB/s (22.3MB/s), 21.2MiB/s-21.2MiB/s (22.3MB/s-22.3MB/s), io=115MiB (121MB), run=5438-5438msec 00:10:23.903 00:10:23.903 Disk stats (read/write): 00:10:23.903 nvme0n1: ios=55747/29008, 
merge=0/0, ticks=482624/215974, in_queue=698598, util=98.60% 00:10:23.903 20:26:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:24.161 20:26:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:10:24.727 20:26:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=75614 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:25.687 20:26:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:25.687 [global] 00:10:25.687 thread=1 00:10:25.687 invalidate=1 00:10:25.687 rw=randrw 00:10:25.687 time_based=1 00:10:25.687 runtime=6 00:10:25.687 ioengine=libaio 00:10:25.687 direct=1 00:10:25.687 bs=4096 00:10:25.687 iodepth=128 00:10:25.687 norandommap=0 00:10:25.687 numjobs=1 00:10:25.687 00:10:25.687 verify_dump=1 00:10:25.687 verify_backlog=512 00:10:25.687 verify_state_save=0 00:10:25.687 do_verify=1 00:10:25.687 verify=crc32c-intel 00:10:25.687 [job0] 00:10:25.687 filename=/dev/nvme0n1 00:10:25.687 Could not set queue depth (nvme0n1) 00:10:25.687 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.687 fio-3.35 00:10:25.687 Starting 1 thread 00:10:26.621 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:26.879 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:27.446 20:26:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:28.379 20:26:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:28.379 20:26:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:28.379 20:26:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:28.379 20:26:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:28.944 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:29.203 20:26:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:10:30.137 20:26:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:10:30.137 20:26:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:30.137 20:26:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:30.137 20:26:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 75614 00:10:32.033 00:10:32.033 job0: (groupid=0, jobs=1): err= 0: pid=75635: Mon Jul 15 20:26:53 2024 00:10:32.033 read: IOPS=10.3k, BW=40.1MiB/s (42.1MB/s)(241MiB/6005msec) 00:10:32.033 slat (usec): min=2, max=6317, avg=48.97, stdev=239.17 00:10:32.033 clat (usec): min=260, max=21098, avg=8528.52, stdev=2772.42 00:10:32.033 lat (usec): min=308, max=21118, avg=8577.49, stdev=2790.22 00:10:32.033 clat percentiles (usec): 00:10:32.033 | 1.00th=[ 1254], 5.00th=[ 2835], 10.00th=[ 5276], 20.00th=[ 6783], 00:10:32.033 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 8455], 60.00th=[ 9241], 00:10:32.033 | 70.00th=[10159], 80.00th=[10814], 90.00th=[11863], 95.00th=[12649], 00:10:32.033 | 99.00th=[14877], 99.50th=[15795], 99.90th=[17695], 99.95th=[19006], 00:10:32.033 | 99.99th=[20317] 00:10:32.033 bw ( KiB/s): min= 9448, max=36544, per=52.72%, avg=21669.82, stdev=9016.99, samples=11 00:10:32.033 iops : min= 2362, max= 9136, avg=5417.45, stdev=2254.25, samples=11 00:10:32.033 write: IOPS=6277, BW=24.5MiB/s (25.7MB/s)(128MiB/5226msec); 0 zone resets 00:10:32.033 slat (usec): min=3, max=2463, avg=65.72, stdev=155.07 00:10:32.033 clat (usec): min=165, max=18760, avg=7186.86, stdev=2610.95 00:10:32.033 lat (usec): min=221, max=18804, avg=7252.58, stdev=2625.88 00:10:32.033 clat percentiles (usec): 00:10:32.033 | 1.00th=[ 971], 5.00th=[ 1876], 10.00th=[ 3621], 20.00th=[ 5014], 00:10:32.033 | 30.00th=[ 6063], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 7963], 00:10:32.033 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10683], 00:10:32.033 | 99.00th=[12256], 99.50th=[13829], 99.90th=[16188], 99.95th=[16712], 00:10:32.033 | 99.99th=[18482] 00:10:32.033 bw ( KiB/s): min= 9592, max=35624, per=86.60%, avg=21744.00, stdev=8822.09, samples=11 00:10:32.033 iops : min= 2398, max= 8906, avg=5436.00, stdev=2205.52, samples=11 00:10:32.033 lat (usec) : 250=0.01%, 500=0.07%, 750=0.22%, 1000=0.43% 00:10:32.033 lat (msec) : 2=3.21%, 4=4.63%, 10=66.44%, 20=24.98%, 50=0.02% 00:10:32.033 cpu : usr=5.61%, sys=27.90%, ctx=8048, majf=0, minf=133 00:10:32.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:32.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.033 issued rwts: total=61705,32804,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.033 00:10:32.033 Run status group 0 (all jobs): 00:10:32.033 READ: bw=40.1MiB/s (42.1MB/s), 40.1MiB/s-40.1MiB/s (42.1MB/s-42.1MB/s), io=241MiB (253MB), run=6005-6005msec 00:10:32.033 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=128MiB (134MB), run=5226-5226msec 00:10:32.033 00:10:32.033 Disk stats (read/write): 00:10:32.033 nvme0n1: ios=60689/32281, merge=0/0, ticks=476057/207002, in_queue=683059, util=98.63% 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.033 20:26:53 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:32.033 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.290 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.290 rmmod nvme_tcp 00:10:32.290 rmmod nvme_fabrics 00:10:32.547 rmmod nvme_keyring 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 75324 ']' 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 75324 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 75324 ']' 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 75324 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75324 00:10:32.547 killing process with pid 75324 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75324' 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 75324 00:10:32.547 20:26:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 
-- # wait 75324 00:10:32.804 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.804 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.804 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.804 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.804 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.804 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.805 20:26:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.805 20:26:54 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:32.805 00:10:32.805 real 0m20.494s 00:10:32.805 user 1m20.349s 00:10:32.805 sys 0m7.482s 00:10:32.805 ************************************ 00:10:32.805 END TEST nvmf_target_multipath 00:10:32.805 ************************************ 00:10:32.805 20:26:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.805 20:26:54 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:32.805 20:26:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:32.805 20:26:54 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.805 20:26:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:32.805 20:26:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.805 20:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.805 ************************************ 00:10:32.805 START TEST nvmf_zcopy 00:10:32.805 ************************************ 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:32.805 * Looking for test storage... 
00:10:32.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:32.805 Cannot find device "nvmf_tgt_br" 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.805 Cannot find device "nvmf_tgt_br2" 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:32.805 Cannot find device "nvmf_tgt_br" 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:10:32.805 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:33.063 Cannot find device "nvmf_tgt_br2" 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:33.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:33.063 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:33.063 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:33.064 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:33.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:10:33.322 00:10:33.322 --- 10.0.0.2 ping statistics --- 00:10:33.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.322 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:33.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:33.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:33.322 00:10:33.322 --- 10.0.0.3 ping statistics --- 00:10:33.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.322 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:33.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:10:33.322 00:10:33.322 --- 10.0.0.1 ping statistics --- 00:10:33.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.322 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=75906 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 75906 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 75906 ']' 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.322 20:26:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:33.322 [2024-07-15 20:26:54.669399] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:10:33.322 [2024-07-15 20:26:54.669506] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.322 [2024-07-15 20:26:54.808594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.581 [2024-07-15 20:26:54.877971] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.581 [2024-07-15 20:26:54.878026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:33.581 [2024-07-15 20:26:54.878039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.581 [2024-07-15 20:26:54.878049] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.581 [2024-07-15 20:26:54.878058] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.581 [2024-07-15 20:26:54.878092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 [2024-07-15 20:26:55.700619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.517 [2024-07-15 20:26:55.720761] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.517 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.518 malloc0 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.518 
20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:34.518 { 00:10:34.518 "params": { 00:10:34.518 "name": "Nvme$subsystem", 00:10:34.518 "trtype": "$TEST_TRANSPORT", 00:10:34.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:34.518 "adrfam": "ipv4", 00:10:34.518 "trsvcid": "$NVMF_PORT", 00:10:34.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:34.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:34.518 "hdgst": ${hdgst:-false}, 00:10:34.518 "ddgst": ${ddgst:-false} 00:10:34.518 }, 00:10:34.518 "method": "bdev_nvme_attach_controller" 00:10:34.518 } 00:10:34.518 EOF 00:10:34.518 )") 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:34.518 20:26:55 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:34.518 "params": { 00:10:34.518 "name": "Nvme1", 00:10:34.518 "trtype": "tcp", 00:10:34.518 "traddr": "10.0.0.2", 00:10:34.518 "adrfam": "ipv4", 00:10:34.518 "trsvcid": "4420", 00:10:34.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:34.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:34.518 "hdgst": false, 00:10:34.518 "ddgst": false 00:10:34.518 }, 00:10:34.518 "method": "bdev_nvme_attach_controller" 00:10:34.518 }' 00:10:34.518 [2024-07-15 20:26:55.819171] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:10:34.518 [2024-07-15 20:26:55.819450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75959 ] 00:10:34.518 [2024-07-15 20:26:55.957538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.777 [2024-07-15 20:26:56.034065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.777 Running I/O for 10 seconds... 
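(For reference, a minimal sketch of how the target-side setup traced above could be reproduced by hand with SPDK's scripts/rpc.py. Assumptions: nvmf_tgt is already running inside the nvmf_tgt_ns_spdk namespace with its RPC socket at the default /var/tmp/spdk.sock, and the rpc_cmd helper used by zcopy.sh is taken to forward its arguments to rpc.py unchanged; NQNs, addresses, ports, and sizes are copied from the trace above, everything else is illustrative rather than the test suite's exact mechanism.)

# Target-side sketch (assumes nvmf_tgt is already started as in the log above).
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py"

# Transport options exactly as traced: '-t tcp -o' from NVMF_TRANSPORT_OPTS,
# in-capsule data size 0, zero-copy enabled (zcopy.sh@22).
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Subsystem limited to 10 namespaces, plus its TCP and discovery listeners (zcopy.sh@24-27).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1 (zcopy.sh@29-30).
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

(On the initiator side, zcopy.sh@33 pipes the output of gen_nvmf_target_json into bdevperf over /dev/fd/62; the same run can be driven from a plain file. A sketch, assuming the standard "subsystems"/"bdev" wrapper of SPDK JSON configs around the bdev_nvme_attach_controller entry printed in the trace, with /tmp/bdevperf.json as an illustrative path.)

# Initiator-side sketch: attach to the target over TCP, then run the same verify workload.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same flags as the traced run: 10 s, queue depth 128, verify workload, 8 KiB I/O size.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192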
00:10:44.742 00:10:44.742 Latency(us) 00:10:44.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:44.742 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:44.742 Verification LBA range: start 0x0 length 0x1000 00:10:44.742 Nvme1n1 : 10.01 5362.44 41.89 0.00 0.00 23792.83 439.39 31457.28 00:10:44.742 =================================================================================================================== 00:10:44.742 Total : 5362.44 41.89 0.00 0.00 23792.83 439.39 31457.28 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=76081 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:45.000 { 00:10:45.000 "params": { 00:10:45.000 "name": "Nvme$subsystem", 00:10:45.000 "trtype": "$TEST_TRANSPORT", 00:10:45.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.000 "adrfam": "ipv4", 00:10:45.000 "trsvcid": "$NVMF_PORT", 00:10:45.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.000 "hdgst": ${hdgst:-false}, 00:10:45.000 "ddgst": ${ddgst:-false} 00:10:45.000 }, 00:10:45.000 "method": "bdev_nvme_attach_controller" 00:10:45.000 } 00:10:45.000 EOF 00:10:45.000 )") 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:45.000 [2024-07-15 20:27:06.364332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.364402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:45.000 20:27:06 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:45.000 "params": { 00:10:45.000 "name": "Nvme1", 00:10:45.000 "trtype": "tcp", 00:10:45.000 "traddr": "10.0.0.2", 00:10:45.000 "adrfam": "ipv4", 00:10:45.000 "trsvcid": "4420", 00:10:45.000 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:45.000 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:45.000 "hdgst": false, 00:10:45.000 "ddgst": false 00:10:45.000 }, 00:10:45.000 "method": "bdev_nvme_attach_controller" 00:10:45.000 }' 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.376326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.376393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.388274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.388321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.400269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.400316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.406679] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:10:45.000 [2024-07-15 20:27:06.406758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76081 ] 00:10:45.000 [2024-07-15 20:27:06.408240] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.408269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.420267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.420306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.432306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.432363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.444292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.444348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.456288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.456338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.468295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.468350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:45.000 [2024-07-15 20:27:06.480298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.480346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.000 [2024-07-15 20:27:06.492298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.000 [2024-07-15 20:27:06.492351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.000 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.504305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.504353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.516342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.516398] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.528318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.528365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.540313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.540356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.546549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.259 [2024-07-15 20:27:06.552338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.552393] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.560317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.560365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.572371] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.572443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.584351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.584408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.596338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.259 [2024-07-15 20:27:06.596386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.259 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.259 [2024-07-15 20:27:06.608327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.608370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.620338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.620386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.632324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.632361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 [2024-07-15 20:27:06.635394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.644325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.644369] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.656362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.656416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.668349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.668395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.680352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.680402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.692349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.692392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.704341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:10:45.260 [2024-07-15 20:27:06.704380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.716398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.716453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.728369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.728411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.740370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.740413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.260 [2024-07-15 20:27:06.752386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.260 [2024-07-15 20:27:06.752436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.260 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.518 [2024-07-15 20:27:06.764408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.518 [2024-07-15 20:27:06.764457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.518 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.518 [2024-07-15 20:27:06.776400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.518 [2024-07-15 20:27:06.776447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.518 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.518 Running I/O for 5 seconds... 00:10:45.518 [2024-07-15 20:27:06.788400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.518 [2024-07-15 20:27:06.788442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.805538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.805597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.821542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.821597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.837296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.837349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.855856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.855935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.872129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.872223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.888857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:45.519 [2024-07-15 20:27:06.888947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.905039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.905098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.922992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.923074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.937838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.937908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.948241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.948286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.963089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.963137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.973032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.973086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.988277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.988331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:06.998881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:06.998929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.519 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.519 [2024-07-15 20:27:07.014109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.519 [2024-07-15 20:27:07.014163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.031723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.031796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.047678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.047736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.066428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.066494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.081053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.081114] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.097363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.097424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.114286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.114347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.130695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.130758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.143920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.143967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.152974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.153019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.167518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.167569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.176769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.176816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.191761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.191816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.201950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.202005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.217364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.217409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.233144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.233195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.244074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.244125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.258650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.258699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:45.778 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:45.778 [2024-07-15 20:27:07.275163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:45.778 [2024-07-15 20:27:07.275219] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.290918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.290967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.301473] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.301522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.316505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.316559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.333198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.333255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.349369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.349428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:46.037 [2024-07-15 20:27:07.364648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.364699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.374101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.374146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.389092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.389141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.404395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.404448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.421576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.421633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.438079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.438127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.449017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.449059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.460129] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.460171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.473222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.473263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.483912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.483952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.037 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.037 [2024-07-15 20:27:07.498784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.037 [2024-07-15 20:27:07.498844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.038 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.038 [2024-07-15 20:27:07.509365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.038 [2024-07-15 20:27:07.509406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.038 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.038 [2024-07-15 20:27:07.524082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.038 [2024-07-15 20:27:07.524125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.038 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.038 [2024-07-15 20:27:07.534606] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.038 [2024-07-15 20:27:07.534650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.549531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.549577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.560092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.560133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.575128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.575173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.592976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.593039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.608404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.608449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.620900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.620940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.637760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.637815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.648623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.648678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.659689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.659728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.670818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.670861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.682057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.682093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.693381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.693419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.709952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.709998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.726506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.726553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.742714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.742769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.753746] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.753790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.768497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.768539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.779000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.779039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.297 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.297 [2024-07-15 20:27:07.796038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.297 [2024-07-15 20:27:07.796098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.591 [2024-07-15 20:27:07.811442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.591 [2024-07-15 20:27:07.811497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.591 [2024-07-15 20:27:07.821315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.591 [2024-07-15 20:27:07.821376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.591 [2024-07-15 20:27:07.833553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.591 [2024-07-15 20:27:07.833625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.591 [2024-07-15 20:27:07.848893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.591 [2024-07-15 20:27:07.848942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.591 [2024-07-15 20:27:07.859495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.591 [2024-07-15 20:27:07.859546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.591 [2024-07-15 20:27:07.870784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.591 [2024-07-15 20:27:07.870839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.591 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.884892] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:46.592 [2024-07-15 20:27:07.884975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.903838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.903912] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.915797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.915885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.932645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.932713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.949613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.949664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.960330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.960375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.975081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.975123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:07.992239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:07.992288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:08.007753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:08.007803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:08.018415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:08.018461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:08.033122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:08.033169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:08.043716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:08.043765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.592 [2024-07-15 20:27:08.059661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.592 [2024-07-15 20:27:08.059736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.592 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.072120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.072173] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.084035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.084085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.099285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.099341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.116293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.116348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.126991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.127042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.143032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.143087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.158788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.158850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.174801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.174858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.185804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.185853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.200333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.200385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.211015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.211064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.225431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.225483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.241061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.241113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.253358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.253409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.263027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.263071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.275079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.275125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.289961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.290013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.300210] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.300262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.314622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.314675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.325424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.325476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:46.854 [2024-07-15 20:27:08.336050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.336099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:46.854 [2024-07-15 20:27:08.347259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:46.854 [2024-07-15 20:27:08.347304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:46.854 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.113 [2024-07-15 20:27:08.361976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.113 [2024-07-15 20:27:08.362028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.113 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.113 [2024-07-15 20:27:08.372598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.113 [2024-07-15 20:27:08.372657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.113 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.113 [2024-07-15 20:27:08.387625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.113 [2024-07-15 20:27:08.387674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.113 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.113 [2024-07-15 20:27:08.397595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.113 [2024-07-15 20:27:08.397642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.113 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.113 [2024-07-15 20:27:08.409061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.409106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.424685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.424740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.443777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.443830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.454548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.454594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.467422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.467478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.482595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.482653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.493261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.493311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.508303] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.508357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.524735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.524791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.542277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.542333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.558409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.558467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.575669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.575725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.591853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.591929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:47.114 [2024-07-15 20:27:08.602922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:47.114 [2024-07-15 20:27:08.602968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:47.114 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:10:47.374 [2024-07-15 20:27:08.617520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:47.374 [2024-07-15 20:27:08.617576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:47.374 2024/07/15 20:27:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line error repeats for every subsequent nvmf_subsystem_add_ns attempt between 2024-07-15 20:27:08.628 and 20:27:10.541 (elapsed 00:10:47.374 through 00:10:49.188): NSID 1 is already in use, so each call is rejected with Code=-32602 Msg=Invalid parameters ...]
00:10:49.188 [2024-07-15 20:27:10.552792]
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.552836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.567893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.567937] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.584712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.584749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.600902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.600942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.617499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.617543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.627900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.627943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.643082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.643138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.653744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.653788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.664889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.664928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.188 [2024-07-15 20:27:10.680181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.188 [2024-07-15 20:27:10.680224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.188 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.697793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.697841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.713080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.713125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.723541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.723583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.738318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.738364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.748889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.748928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.763698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.763748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.773817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.773885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.784450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.784505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.797191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.797248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.806765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.806804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.819973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.820033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.835791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.835835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.850699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.850741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.868245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.868289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.883025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.883074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.895479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.895526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.913146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
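The repeated Code=-32602 (Invalid parameters) failures in these records are expected for this run: the subsystem nqn.2016-06.io.spdk:cnode1 already exposes malloc0 as namespace 1, so every further nvmf_subsystem_add_ns call requesting NSID 1 is rejected with "Requested NSID 1 already in use" while the zcopy test keeps retrying. As a minimal sketch of how such a call is normally issued against a running SPDK target - assuming the stock scripts/rpc.py helper and the default RPC socket at /var/tmp/spdk.sock, neither of which is shown verbatim in this log, and with option names that may differ slightly between SPDK versions:

  # Hypothetical illustration, not a command taken from this run.
  # The first add succeeds; repeating it for the same NSID returns JSON-RPC
  # error Code=-32602, matching the "Requested NSID 1 already in use" records above.
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # rejected: NSID 1 already in use

The params map printed in each error record (namespace:{bdev_name:malloc0, nsid:1}, nqn:nqn.2016-06.io.spdk:cnode1) is the Go JSON-RPC client's rendering of exactly this request, which is why the same -32602 response repeats for every attempt below.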
00:10:49.447 [2024-07-15 20:27:10.913193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.928780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.928830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.447 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.447 [2024-07-15 20:27:10.945113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.447 [2024-07-15 20:27:10.945180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:10.962160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:10.962212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:10.978073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:10.978119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:10.988463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:10.988506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:11.003658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:11.003723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:11.019953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:11.020002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:11.035641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:11.035699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:11.051483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:11.051532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:11.062060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:11.062105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.705 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.705 [2024-07-15 20:27:11.077607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.705 [2024-07-15 20:27:11.077663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.094600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.094665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.110451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.110501] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.121340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.121383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.136207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.136257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.153198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.153244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.168863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.168929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.186242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.186312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.706 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.706 [2024-07-15 20:27:11.202422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.706 [2024-07-15 20:27:11.202476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.213011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.213056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.224237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.224283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.241155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.241222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.257935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.257992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.274340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.274393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.291301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.291359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.307027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.307077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.317477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.317521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.329006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.329051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.344850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.344913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.360807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.963 [2024-07-15 20:27:11.360864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.963 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.963 [2024-07-15 20:27:11.371526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.371574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.964 [2024-07-15 20:27:11.386560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.386608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:10:49.964 [2024-07-15 20:27:11.403062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.403115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.964 [2024-07-15 20:27:11.420377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.420436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.964 [2024-07-15 20:27:11.436009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.436062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.964 [2024-07-15 20:27:11.446729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.446780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:49.964 [2024-07-15 20:27:11.457953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:49.964 [2024-07-15 20:27:11.458007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.964 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.475149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.475216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.492444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.492514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.508551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.508622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.525262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.525339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.541715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.541786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.558793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.558884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.575195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.575269] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.591245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.591333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.601447] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.601515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.616305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.616368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.627440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.627493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.642388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.642454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.659700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.659774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.676559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.676649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.692526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.692579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.708655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.708703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.221 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.221 [2024-07-15 20:27:11.719196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.221 [2024-07-15 20:27:11.719254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.730921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.730967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.742372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.742428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.758550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.758601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.768140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.768181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.784079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.784126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.795627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.795670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 00:10:50.478 Latency(us) 00:10:50.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.478 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:50.478 Nvme1n1 : 5.01 11207.47 87.56 0.00 0.00 11406.68 4498.15 20614.05 00:10:50.478 =================================================================================================================== 00:10:50.478 Total : 11207.47 87.56 0.00 0.00 11406.68 4498.15 20614.05 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.803619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.803658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.811606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.811643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.823644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.823694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.831637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.831688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.478 [2024-07-15 20:27:11.843660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.478 [2024-07-15 20:27:11.843711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.478 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.855665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.855721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.867656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.867707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.879659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.879701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.887634] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.887684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.899681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.899737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.911663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.911706] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.923671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.923713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.935693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.935742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.947671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.947713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 [2024-07-15 20:27:11.959679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.479 [2024-07-15 20:27:11.959718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.479 2024/07/15 20:27:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:50.479 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76081) - No such process 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 76081 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.479 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.736 delay0 00:10:50.737 20:27:11 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.737 20:27:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:50.737 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.737 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.737 20:27:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.737 20:27:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:50.737 [2024-07-15 20:27:12.159795] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:58.842 Initializing NVMe Controllers 00:10:58.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:58.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:58.842 Initialization complete. Launching workers. 00:10:58.842 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 249, failed: 20770 00:10:58.842 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20889, failed to submit 130 00:10:58.842 success 20816, unsuccess 73, failed 0 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.842 rmmod nvme_tcp 00:10:58.842 rmmod nvme_fabrics 00:10:58.842 rmmod nvme_keyring 00:10:58.842 20:27:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 75906 ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 75906 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 75906 ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 75906 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75906 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:58.842 killing process with pid 75906 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75906' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@967 -- # kill 75906 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 75906 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:58.842 00:10:58.842 real 0m25.095s 00:10:58.842 user 0m39.454s 00:10:58.842 sys 0m7.409s 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.842 20:27:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:58.842 ************************************ 00:10:58.842 END TEST nvmf_zcopy 00:10:58.842 ************************************ 00:10:58.842 20:27:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:58.842 20:27:19 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:58.842 20:27:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:58.842 20:27:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.842 20:27:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.842 ************************************ 00:10:58.842 START TEST nvmf_nmic 00:10:58.842 ************************************ 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:58.842 * Looking for test storage... 
00:10:58.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 
00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:58.842 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:58.843 Cannot find device "nvmf_tgt_br" 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:58.843 Cannot find device "nvmf_tgt_br2" 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:58.843 Cannot find device "nvmf_tgt_br" 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:58.843 Cannot find device "nvmf_tgt_br2" 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:58.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:58.843 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:58.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:58.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:10:58.843 00:10:58.843 --- 10.0.0.2 ping statistics --- 00:10:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.843 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:58.843 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:58.843 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:58.843 00:10:58.843 --- 10.0.0.3 ping statistics --- 00:10:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.843 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:58.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:58.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:10:58.843 00:10:58.843 --- 10.0.0.1 ping statistics --- 00:10:58.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:58.843 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=76408 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 76408 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 76408 ']' 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.843 20:27:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:58.843 [2024-07-15 20:27:19.805774] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:10:58.843 [2024-07-15 20:27:19.805894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:58.843 [2024-07-15 20:27:19.936787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:58.843 [2024-07-15 20:27:19.997894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:58.843 [2024-07-15 20:27:19.998165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:58.843 [2024-07-15 20:27:19.998373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:58.843 [2024-07-15 20:27:19.998504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:58.843 [2024-07-15 20:27:19.998657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:58.843 [2024-07-15 20:27:19.998886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:58.843 [2024-07-15 20:27:19.998970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:58.843 [2024-07-15 20:27:19.999127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:58.843 [2024-07-15 20:27:19.999132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.414 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 [2024-07-15 20:27:20.826614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.415 Malloc0 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.415 [2024-07-15 20:27:20.891605] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 test case1: single bdev can't be used in multiple subsystems 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.415 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.672 [2024-07-15 20:27:20.919491] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:59.672 [2024-07-15 20:27:20.919545] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:59.672 [2024-07-15 20:27:20.919558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.672 2024/07/15 20:27:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:10:59.672 request: 00:10:59.672 { 00:10:59.672 "method": "nvmf_subsystem_add_ns", 00:10:59.672 "params": { 00:10:59.672 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:59.672 "namespace": { 00:10:59.672 "bdev_name": "Malloc0", 00:10:59.672 "no_auto_visible": false 00:10:59.672 } 00:10:59.672 } 00:10:59.672 } 00:10:59.672 Got JSON-RPC error response 00:10:59.672 GoRPCClient: error on JSON-RPC call 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:59.672 Adding namespace failed - expected result. 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:10:59.672 test case2: host connect to nvmf target in multiple paths 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:59.672 [2024-07-15 20:27:20.935722] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.672 20:27:20 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.672 20:27:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:59.931 20:27:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:59.931 20:27:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:59.931 20:27:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.931 20:27:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:59.931 20:27:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:01.827 20:27:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:01.827 [global] 00:11:01.827 thread=1 00:11:01.827 invalidate=1 00:11:01.827 rw=write 00:11:01.827 time_based=1 00:11:01.827 runtime=1 00:11:01.827 ioengine=libaio 00:11:01.827 direct=1 00:11:01.827 bs=4096 00:11:01.827 iodepth=1 00:11:01.827 norandommap=0 00:11:01.827 numjobs=1 00:11:01.827 00:11:01.827 verify_dump=1 00:11:01.827 verify_backlog=512 00:11:01.827 verify_state_save=0 00:11:01.827 do_verify=1 00:11:01.827 verify=crc32c-intel 00:11:01.827 [job0] 00:11:01.827 filename=/dev/nvme0n1 00:11:02.085 Could not set queue depth (nvme0n1) 00:11:02.085 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.085 fio-3.35 00:11:02.085 Starting 1 thread 00:11:03.475 00:11:03.475 job0: (groupid=0, jobs=1): err= 0: pid=76518: Mon Jul 15 20:27:24 2024 00:11:03.475 read: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec) 00:11:03.475 slat (nsec): min=13925, max=44264, avg=16088.38, stdev=2496.70 00:11:03.475 clat (usec): 
min=130, max=428, avg=148.08, stdev=11.55 00:11:03.475 lat (usec): min=145, max=443, avg=164.16, stdev=11.99 00:11:03.475 clat percentiles (usec): 00:11:03.475 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:11:03.475 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:11:03.475 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 167], 00:11:03.475 | 99.00th=[ 184], 99.50th=[ 204], 99.90th=[ 225], 99.95th=[ 258], 00:11:03.475 | 99.99th=[ 429] 00:11:03.475 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:11:03.475 slat (usec): min=20, max=124, avg=24.52, stdev= 6.10 00:11:03.475 clat (usec): min=89, max=681, avg=104.47, stdev=14.36 00:11:03.475 lat (usec): min=113, max=716, avg=128.99, stdev=17.01 00:11:03.475 clat percentiles (usec): 00:11:03.475 | 1.00th=[ 93], 5.00th=[ 95], 10.00th=[ 96], 20.00th=[ 98], 00:11:03.475 | 30.00th=[ 99], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:11:03.475 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 124], 00:11:03.475 | 99.00th=[ 147], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 253], 00:11:03.475 | 99.99th=[ 685] 00:11:03.475 bw ( KiB/s): min=14816, max=14816, per=100.00%, avg=14816.00, stdev= 0.00, samples=1 00:11:03.475 iops : min= 3704, max= 3704, avg=3704.00, stdev= 0.00, samples=1 00:11:03.475 lat (usec) : 100=20.76%, 250=79.18%, 500=0.04%, 750=0.01% 00:11:03.475 cpu : usr=3.00%, sys=9.90%, ctx=6782, majf=0, minf=2 00:11:03.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.475 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.475 00:11:03.475 Run status group 0 (all jobs): 00:11:03.475 READ: bw=12.5MiB/s (13.1MB/s), 12.5MiB/s-12.5MiB/s (13.1MB/s-13.1MB/s), io=12.5MiB (13.1MB), run=1001-1001msec 00:11:03.475 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:11:03.475 00:11:03.475 Disk stats (read/write): 00:11:03.475 nvme0n1: ios=3027/3072, merge=0/0, ticks=465/346, in_queue=811, util=91.28% 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 
00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:03.475 rmmod nvme_tcp 00:11:03.475 rmmod nvme_fabrics 00:11:03.475 rmmod nvme_keyring 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 76408 ']' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 76408 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 76408 ']' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 76408 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76408 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76408' 00:11:03.475 killing process with pid 76408 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 76408 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 76408 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.475 20:27:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:03.475 00:11:03.475 real 0m5.692s 00:11:03.475 user 0m19.270s 00:11:03.475 sys 0m1.251s 00:11:03.735 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:03.735 20:27:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.735 ************************************ 00:11:03.735 END TEST nvmf_nmic 00:11:03.735 ************************************ 00:11:03.735 20:27:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:03.735 20:27:25 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.735 20:27:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:03.735 20:27:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:03.735 20:27:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:03.735 ************************************ 00:11:03.735 START TEST nvmf_fio_target 00:11:03.735 ************************************ 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:03.735 * Looking for test storage... 00:11:03.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:03.735 
20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.735 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:03.736 Cannot find device "nvmf_tgt_br" 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.736 Cannot find device "nvmf_tgt_br2" 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:03.736 Cannot find device "nvmf_tgt_br" 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:03.736 Cannot find device "nvmf_tgt_br2" 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:03.736 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:03.995 20:27:25 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:03.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:11:03.995 00:11:03.995 --- 10.0.0.2 ping statistics --- 00:11:03.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.995 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:03.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:03.995 00:11:03.995 --- 10.0.0.3 ping statistics --- 00:11:03.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.995 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:03.995 00:11:03.995 --- 10.0.0.1 ping statistics --- 00:11:03.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.995 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=76697 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 76697 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 76697 ']' 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.995 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.254 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
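The nvmf_veth_init trace above builds the test network in three steps: three veth pairs (one initiator-side, two target-side), a network namespace nvmf_tgt_ns_spdk that owns the target ends, and a bridge nvmf_br plus iptables ACCEPT rules for NVMe/TCP on port 4420. A minimal standalone sketch of the same topology, using only the interface names and addresses shown in the trace (assumes root plus iproute2/iptables; error handling and the cleanup pass omitted):

  # namespace and the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # move the target ends into the namespace and assign the 10.0.0.x/24 addresses
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the host-side peers together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP traffic to reach the initiator-side interface
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks in the log (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify both directions of this bridge path before the target application is started inside the namespace.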
00:11:04.254 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.254 20:27:25 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.254 [2024-07-15 20:27:25.573052] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:04.254 [2024-07-15 20:27:25.573664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.254 [2024-07-15 20:27:25.716652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.512 [2024-07-15 20:27:25.774996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.512 [2024-07-15 20:27:25.775041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.512 [2024-07-15 20:27:25.775051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.512 [2024-07-15 20:27:25.775059] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.512 [2024-07-15 20:27:25.775066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.512 [2024-07-15 20:27:25.776014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.512 [2024-07-15 20:27:25.776233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.512 [2024-07-15 20:27:25.776292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.512 [2024-07-15 20:27:25.776285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.447 20:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.447 [2024-07-15 20:27:26.930353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.744 20:27:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:05.744 20:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:05.744 20:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.002 20:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:06.002 20:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:06.591 20:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:06.591 20:27:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:11:06.591 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:06.591 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:06.849 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.415 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:07.415 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.672 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:07.673 20:27:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:07.931 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:07.931 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:08.188 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.446 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:08.446 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:08.705 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:08.705 20:27:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:08.962 20:27:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.219 [2024-07-15 20:27:30.485169] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.219 20:27:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:09.477 20:27:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:09.735 20:27:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 
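Collapsed out of the xtrace above, the RPC sequence that fio.sh uses to assemble the target is short: one TCP transport, seven 64 MiB malloc bdevs (two exported directly, two behind a RAID-0, three behind a concat), one subsystem with those four bdevs as namespaces, and one listener on 10.0.0.2:4420. A condensed sketch of the same calls, with the rpc.py path, NQN, and serial taken from the trace (the --hostnqn/--hostid arguments of the nvme connect shown in the log are dropped here for brevity):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 7); do $rpc bdev_malloc_create 64 512; done    # Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for b in Malloc0 Malloc1 raid0 concat0; do
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: connect, then wait until all four namespaces show up
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 2; done

The waitforserial loop that follows in the log does exactly this namespace count by serial number, which is why the subsequent fio jobs can assume /dev/nvme0n1 through /dev/nvme0n4 exist.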
00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:12.263 20:27:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:12.263 [global] 00:11:12.263 thread=1 00:11:12.263 invalidate=1 00:11:12.263 rw=write 00:11:12.263 time_based=1 00:11:12.263 runtime=1 00:11:12.263 ioengine=libaio 00:11:12.263 direct=1 00:11:12.263 bs=4096 00:11:12.263 iodepth=1 00:11:12.263 norandommap=0 00:11:12.263 numjobs=1 00:11:12.263 00:11:12.263 verify_dump=1 00:11:12.263 verify_backlog=512 00:11:12.263 verify_state_save=0 00:11:12.263 do_verify=1 00:11:12.263 verify=crc32c-intel 00:11:12.263 [job0] 00:11:12.263 filename=/dev/nvme0n1 00:11:12.263 [job1] 00:11:12.263 filename=/dev/nvme0n2 00:11:12.263 [job2] 00:11:12.263 filename=/dev/nvme0n3 00:11:12.263 [job3] 00:11:12.263 filename=/dev/nvme0n4 00:11:12.263 Could not set queue depth (nvme0n1) 00:11:12.263 Could not set queue depth (nvme0n2) 00:11:12.263 Could not set queue depth (nvme0n3) 00:11:12.263 Could not set queue depth (nvme0n4) 00:11:12.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.263 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.263 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.263 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:12.263 fio-3.35 00:11:12.263 Starting 4 threads 00:11:13.197 00:11:13.197 job0: (groupid=0, jobs=1): err= 0: pid=76990: Mon Jul 15 20:27:34 2024 00:11:13.197 read: IOPS=2790, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:11:13.197 slat (nsec): min=13496, max=54184, avg=16319.13, stdev=2970.69 00:11:13.197 clat (usec): min=147, max=330, avg=171.44, stdev=11.51 00:11:13.197 lat (usec): min=162, max=346, avg=187.76, stdev=12.11 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:11:13.197 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:11:13.197 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 190], 00:11:13.197 | 99.00th=[ 208], 99.50th=[ 223], 99.90th=[ 251], 99.95th=[ 262], 00:11:13.197 | 99.99th=[ 330] 00:11:13.197 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:13.197 slat (nsec): min=19520, max=98230, avg=23441.32, stdev=3674.50 00:11:13.197 clat (usec): min=103, max=660, avg=127.82, stdev=16.77 00:11:13.197 lat (usec): min=124, max=697, avg=151.26, stdev=17.55 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 111], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 120], 00:11:13.197 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 129], 00:11:13.197 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:11:13.197 | 
99.00th=[ 161], 99.50th=[ 182], 99.90th=[ 318], 99.95th=[ 375], 00:11:13.197 | 99.99th=[ 660] 00:11:13.197 bw ( KiB/s): min=12288, max=12288, per=31.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:13.197 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:13.197 lat (usec) : 250=99.80%, 500=0.19%, 750=0.02% 00:11:13.197 cpu : usr=2.60%, sys=8.40%, ctx=5865, majf=0, minf=13 00:11:13.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 issued rwts: total=2793,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.197 job1: (groupid=0, jobs=1): err= 0: pid=76991: Mon Jul 15 20:27:34 2024 00:11:13.197 read: IOPS=1622, BW=6490KiB/s (6645kB/s)(6496KiB/1001msec) 00:11:13.197 slat (nsec): min=11903, max=44097, avg=13575.81, stdev=3257.48 00:11:13.197 clat (usec): min=240, max=425, avg=286.77, stdev=16.05 00:11:13.197 lat (usec): min=259, max=437, avg=300.35, stdev=16.12 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:11:13.197 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:13.197 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:11:13.197 | 99.00th=[ 330], 99.50th=[ 371], 99.90th=[ 416], 99.95th=[ 424], 00:11:13.197 | 99.99th=[ 424] 00:11:13.197 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:13.197 slat (usec): min=14, max=106, avg=23.62, stdev= 6.72 00:11:13.197 clat (usec): min=119, max=395, avg=223.32, stdev=15.72 00:11:13.197 lat (usec): min=146, max=420, avg=246.95, stdev=15.00 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 00:11:13.197 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:11:13.197 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 247], 00:11:13.197 | 99.00th=[ 262], 99.50th=[ 269], 99.90th=[ 322], 99.95th=[ 330], 00:11:13.197 | 99.99th=[ 396] 00:11:13.197 bw ( KiB/s): min= 8192, max= 8192, per=20.67%, avg=8192.00, stdev= 0.00, samples=1 00:11:13.197 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:13.197 lat (usec) : 250=54.00%, 500=46.00% 00:11:13.197 cpu : usr=1.10%, sys=5.90%, ctx=3674, majf=0, minf=5 00:11:13.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 issued rwts: total=1624,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.197 job2: (groupid=0, jobs=1): err= 0: pid=76997: Mon Jul 15 20:27:34 2024 00:11:13.197 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:13.197 slat (usec): min=13, max=111, avg=19.17, stdev= 5.61 00:11:13.197 clat (usec): min=156, max=1952, avg=182.63, stdev=42.54 00:11:13.197 lat (usec): min=170, max=1969, avg=201.81, stdev=43.27 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:11:13.197 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:11:13.197 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 
00:11:13.197 | 99.00th=[ 221], 99.50th=[ 251], 99.90th=[ 519], 99.95th=[ 1029], 00:11:13.197 | 99.99th=[ 1958] 00:11:13.197 write: IOPS=2748, BW=10.7MiB/s (11.3MB/s)(10.7MiB/1001msec); 0 zone resets 00:11:13.197 slat (nsec): min=19952, max=94240, avg=28315.61, stdev=7854.79 00:11:13.197 clat (usec): min=113, max=464, avg=143.27, stdev=21.69 00:11:13.197 lat (usec): min=136, max=496, avg=171.59, stdev=26.23 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 130], 00:11:13.197 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:11:13.197 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 172], 95.00th=[ 190], 00:11:13.197 | 99.00th=[ 210], 99.50th=[ 219], 99.90th=[ 359], 99.95th=[ 375], 00:11:13.197 | 99.99th=[ 465] 00:11:13.197 bw ( KiB/s): min=12288, max=12288, per=31.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:13.197 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:13.197 lat (usec) : 250=99.64%, 500=0.30%, 750=0.02% 00:11:13.197 lat (msec) : 2=0.04% 00:11:13.197 cpu : usr=2.00%, sys=10.10%, ctx=5312, majf=0, minf=4 00:11:13.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 issued rwts: total=2560,2751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:13.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.197 job3: (groupid=0, jobs=1): err= 0: pid=76999: Mon Jul 15 20:27:34 2024 00:11:13.197 read: IOPS=1623, BW=6494KiB/s (6649kB/s)(6500KiB/1001msec) 00:11:13.197 slat (nsec): min=13776, max=30431, avg=15249.76, stdev=2354.55 00:11:13.197 clat (usec): min=173, max=426, avg=284.97, stdev=15.79 00:11:13.197 lat (usec): min=197, max=441, avg=300.22, stdev=15.79 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:11:13.197 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:13.197 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 310], 00:11:13.197 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 420], 99.95th=[ 429], 00:11:13.197 | 99.99th=[ 429] 00:11:13.197 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:13.197 slat (usec): min=14, max=109, avg=23.52, stdev= 5.95 00:11:13.197 clat (usec): min=128, max=393, avg=223.45, stdev=15.56 00:11:13.197 lat (usec): min=149, max=415, avg=246.97, stdev=14.33 00:11:13.197 clat percentiles (usec): 00:11:13.197 | 1.00th=[ 186], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 212], 00:11:13.197 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:11:13.197 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 247], 00:11:13.197 | 99.00th=[ 260], 99.50th=[ 265], 99.90th=[ 330], 99.95th=[ 351], 00:11:13.197 | 99.99th=[ 392] 00:11:13.197 bw ( KiB/s): min= 8192, max= 8192, per=20.67%, avg=8192.00, stdev= 0.00, samples=1 00:11:13.197 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:13.197 lat (usec) : 250=54.02%, 500=45.98% 00:11:13.197 cpu : usr=1.50%, sys=5.50%, ctx=3674, majf=0, minf=13 00:11:13.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:13.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:13.197 issued rwts: total=1625,2048,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:13.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:13.197 00:11:13.197 Run status group 0 (all jobs): 00:11:13.197 READ: bw=33.6MiB/s (35.2MB/s), 6490KiB/s-10.9MiB/s (6645kB/s-11.4MB/s), io=33.6MiB (35.2MB), run=1001-1001msec 00:11:13.197 WRITE: bw=38.7MiB/s (40.6MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=38.7MiB (40.6MB), run=1001-1001msec 00:11:13.197 00:11:13.197 Disk stats (read/write): 00:11:13.197 nvme0n1: ios=2495/2560, merge=0/0, ticks=462/345, in_queue=807, util=87.17% 00:11:13.197 nvme0n2: ios=1551/1579, merge=0/0, ticks=443/351, in_queue=794, util=87.77% 00:11:13.197 nvme0n3: ios=2048/2520, merge=0/0, ticks=392/398, in_queue=790, util=89.19% 00:11:13.198 nvme0n4: ios=1536/1579, merge=0/0, ticks=441/376, in_queue=817, util=89.64% 00:11:13.198 20:27:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:13.198 [global] 00:11:13.198 thread=1 00:11:13.198 invalidate=1 00:11:13.198 rw=randwrite 00:11:13.198 time_based=1 00:11:13.198 runtime=1 00:11:13.198 ioengine=libaio 00:11:13.198 direct=1 00:11:13.198 bs=4096 00:11:13.198 iodepth=1 00:11:13.198 norandommap=0 00:11:13.198 numjobs=1 00:11:13.198 00:11:13.198 verify_dump=1 00:11:13.198 verify_backlog=512 00:11:13.198 verify_state_save=0 00:11:13.198 do_verify=1 00:11:13.198 verify=crc32c-intel 00:11:13.198 [job0] 00:11:13.198 filename=/dev/nvme0n1 00:11:13.198 [job1] 00:11:13.198 filename=/dev/nvme0n2 00:11:13.198 [job2] 00:11:13.198 filename=/dev/nvme0n3 00:11:13.198 [job3] 00:11:13.198 filename=/dev/nvme0n4 00:11:13.198 Could not set queue depth (nvme0n1) 00:11:13.198 Could not set queue depth (nvme0n2) 00:11:13.198 Could not set queue depth (nvme0n3) 00:11:13.198 Could not set queue depth (nvme0n4) 00:11:13.454 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.454 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.454 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.454 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:13.454 fio-3.35 00:11:13.455 Starting 4 threads 00:11:14.830 00:11:14.830 job0: (groupid=0, jobs=1): err= 0: pid=77052: Mon Jul 15 20:27:35 2024 00:11:14.830 read: IOPS=2844, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:11:14.830 slat (nsec): min=13469, max=64215, avg=16959.51, stdev=3810.10 00:11:14.830 clat (usec): min=144, max=340, avg=166.93, stdev=10.13 00:11:14.830 lat (usec): min=160, max=367, avg=183.89, stdev=11.02 00:11:14.830 clat percentiles (usec): 00:11:14.830 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 159], 00:11:14.830 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:14.830 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:11:14.830 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 223], 99.95th=[ 314], 00:11:14.830 | 99.99th=[ 343] 00:11:14.830 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:14.830 slat (usec): min=19, max=131, avg=23.74, stdev= 5.48 00:11:14.830 clat (usec): min=56, max=704, avg=127.74, stdev=17.41 00:11:14.830 lat (usec): min=126, max=739, avg=151.47, stdev=18.78 00:11:14.830 clat percentiles (usec): 00:11:14.830 | 1.00th=[ 111], 5.00th=[ 115], 10.00th=[ 117], 20.00th=[ 121], 
00:11:14.830 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:11:14.830 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:11:14.830 | 99.00th=[ 155], 99.50th=[ 163], 99.90th=[ 379], 99.95th=[ 465], 00:11:14.830 | 99.99th=[ 701] 00:11:14.830 bw ( KiB/s): min=12288, max=12288, per=31.18%, avg=12288.00, stdev= 0.00, samples=1 00:11:14.830 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:14.830 lat (usec) : 100=0.02%, 250=99.81%, 500=0.15%, 750=0.02% 00:11:14.830 cpu : usr=2.60%, sys=8.90%, ctx=5923, majf=0, minf=10 00:11:14.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.830 issued rwts: total=2847,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.830 job1: (groupid=0, jobs=1): err= 0: pid=77053: Mon Jul 15 20:27:35 2024 00:11:14.830 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:14.830 slat (usec): min=11, max=245, avg=14.83, stdev= 7.50 00:11:14.830 clat (usec): min=53, max=41302, avg=320.65, stdev=1046.85 00:11:14.830 lat (usec): min=267, max=41319, avg=335.48, stdev=1046.91 00:11:14.830 clat percentiles (usec): 00:11:14.830 | 1.00th=[ 255], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:14.830 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:14.830 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 334], 00:11:14.830 | 99.00th=[ 396], 99.50th=[ 506], 99.90th=[ 947], 99.95th=[41157], 00:11:14.830 | 99.99th=[41157] 00:11:14.830 write: IOPS=1857, BW=7429KiB/s (7607kB/s)(7436KiB/1001msec); 0 zone resets 00:11:14.830 slat (usec): min=12, max=557, avg=26.78, stdev=30.03 00:11:14.830 clat (usec): min=2, max=531, avg=230.27, stdev=33.66 00:11:14.830 lat (usec): min=136, max=616, avg=257.05, stdev=30.65 00:11:14.830 clat percentiles (usec): 00:11:14.830 | 1.00th=[ 129], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:11:14.830 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 233], 00:11:14.830 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:11:14.830 | 99.00th=[ 347], 99.50th=[ 383], 99.90th=[ 529], 99.95th=[ 529], 00:11:14.831 | 99.99th=[ 529] 00:11:14.831 bw ( KiB/s): min= 8192, max= 8192, per=20.79%, avg=8192.00, stdev= 0.00, samples=1 00:11:14.831 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:14.831 lat (usec) : 4=0.35%, 50=0.03%, 100=0.06%, 250=48.69%, 500=50.54% 00:11:14.831 lat (usec) : 750=0.27%, 1000=0.03% 00:11:14.831 lat (msec) : 50=0.03% 00:11:14.831 cpu : usr=1.40%, sys=5.50%, ctx=3470, majf=0, minf=9 00:11:14.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.831 issued rwts: total=1536,1859,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.831 job2: (groupid=0, jobs=1): err= 0: pid=77054: Mon Jul 15 20:27:35 2024 00:11:14.831 read: IOPS=2651, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:11:14.831 slat (nsec): min=13317, max=40500, avg=15567.55, stdev=1919.54 00:11:14.831 clat (usec): min=151, max=291, avg=176.08, stdev=15.73 00:11:14.831 lat (usec): min=166, max=314, 
avg=191.65, stdev=15.85 00:11:14.831 clat percentiles (usec): 00:11:14.831 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:11:14.831 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:11:14.831 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198], 00:11:14.831 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 281], 00:11:14.831 | 99.99th=[ 293] 00:11:14.831 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:14.831 slat (usec): min=19, max=126, avg=22.51, stdev= 4.93 00:11:14.831 clat (usec): min=114, max=596, avg=134.13, stdev=15.06 00:11:14.831 lat (usec): min=134, max=628, avg=156.65, stdev=16.65 00:11:14.831 clat percentiles (usec): 00:11:14.831 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 126], 00:11:14.831 | 30.00th=[ 128], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 135], 00:11:14.831 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:11:14.831 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 245], 99.95th=[ 445], 00:11:14.831 | 99.99th=[ 594] 00:11:14.831 bw ( KiB/s): min=12288, max=12288, per=31.18%, avg=12288.00, stdev= 0.00, samples=1 00:11:14.831 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:14.831 lat (usec) : 250=99.41%, 500=0.58%, 750=0.02% 00:11:14.831 cpu : usr=2.10%, sys=8.30%, ctx=5727, majf=0, minf=15 00:11:14.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.831 issued rwts: total=2654,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.831 job3: (groupid=0, jobs=1): err= 0: pid=77055: Mon Jul 15 20:27:35 2024 00:11:14.831 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:14.831 slat (nsec): min=10267, max=40829, avg=16158.08, stdev=2959.75 00:11:14.831 clat (usec): min=229, max=41358, avg=319.49, stdev=1048.31 00:11:14.831 lat (usec): min=260, max=41369, avg=335.65, stdev=1048.17 00:11:14.831 clat percentiles (usec): 00:11:14.831 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:11:14.831 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:14.831 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 338], 00:11:14.831 | 99.00th=[ 396], 99.50th=[ 515], 99.90th=[ 922], 99.95th=[41157], 00:11:14.831 | 99.99th=[41157] 00:11:14.831 write: IOPS=1856, BW=7425KiB/s (7603kB/s)(7432KiB/1001msec); 0 zone resets 00:11:14.831 slat (usec): min=12, max=452, avg=26.13, stdev=24.48 00:11:14.831 clat (usec): min=2, max=529, avg=231.13, stdev=32.20 00:11:14.831 lat (usec): min=143, max=598, avg=257.26, stdev=31.32 00:11:14.831 clat percentiles (usec): 00:11:14.831 | 1.00th=[ 149], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:11:14.831 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:11:14.831 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 251], 95.00th=[ 265], 00:11:14.831 | 99.00th=[ 363], 99.50th=[ 420], 99.90th=[ 519], 99.95th=[ 529], 00:11:14.831 | 99.99th=[ 529] 00:11:14.831 bw ( KiB/s): min= 8192, max= 8192, per=20.79%, avg=8192.00, stdev= 0.00, samples=1 00:11:14.831 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:14.831 lat (usec) : 4=0.09%, 10=0.03%, 50=0.06%, 100=0.03%, 250=48.91% 00:11:14.831 lat (usec) : 500=50.53%, 750=0.29%, 1000=0.03% 00:11:14.831 lat (msec) : 
50=0.03% 00:11:14.831 cpu : usr=1.90%, sys=4.90%, ctx=3441, majf=0, minf=11 00:11:14.831 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.831 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.831 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.831 issued rwts: total=1536,1858,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.831 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.831 00:11:14.831 Run status group 0 (all jobs): 00:11:14.831 READ: bw=33.5MiB/s (35.1MB/s), 6138KiB/s-11.1MiB/s (6285kB/s-11.6MB/s), io=33.5MiB (35.1MB), run=1001-1001msec 00:11:14.831 WRITE: bw=38.5MiB/s (40.3MB/s), 7425KiB/s-12.0MiB/s (7603kB/s-12.6MB/s), io=38.5MiB (40.4MB), run=1001-1001msec 00:11:14.831 00:11:14.831 Disk stats (read/write): 00:11:14.831 nvme0n1: ios=2610/2583, merge=0/0, ticks=472/358, in_queue=830, util=88.98% 00:11:14.831 nvme0n2: ios=1427/1536, merge=0/0, ticks=475/374, in_queue=849, util=88.79% 00:11:14.831 nvme0n3: ios=2396/2560, merge=0/0, ticks=428/370, in_queue=798, util=89.33% 00:11:14.831 nvme0n4: ios=1387/1536, merge=0/0, ticks=447/375, in_queue=822, util=89.79% 00:11:14.831 20:27:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:14.831 [global] 00:11:14.831 thread=1 00:11:14.831 invalidate=1 00:11:14.831 rw=write 00:11:14.831 time_based=1 00:11:14.831 runtime=1 00:11:14.831 ioengine=libaio 00:11:14.831 direct=1 00:11:14.831 bs=4096 00:11:14.831 iodepth=128 00:11:14.831 norandommap=0 00:11:14.831 numjobs=1 00:11:14.831 00:11:14.831 verify_dump=1 00:11:14.831 verify_backlog=512 00:11:14.831 verify_state_save=0 00:11:14.831 do_verify=1 00:11:14.831 verify=crc32c-intel 00:11:14.831 [job0] 00:11:14.831 filename=/dev/nvme0n1 00:11:14.831 [job1] 00:11:14.831 filename=/dev/nvme0n2 00:11:14.831 [job2] 00:11:14.831 filename=/dev/nvme0n3 00:11:14.831 [job3] 00:11:14.831 filename=/dev/nvme0n4 00:11:14.831 Could not set queue depth (nvme0n1) 00:11:14.831 Could not set queue depth (nvme0n2) 00:11:14.831 Could not set queue depth (nvme0n3) 00:11:14.831 Could not set queue depth (nvme0n4) 00:11:14.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.831 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.831 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.831 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:14.831 fio-3.35 00:11:14.831 Starting 4 threads 00:11:16.254 00:11:16.254 job0: (groupid=0, jobs=1): err= 0: pid=77109: Mon Jul 15 20:27:37 2024 00:11:16.254 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:16.254 slat (usec): min=6, max=5025, avg=82.37, stdev=388.98 00:11:16.254 clat (usec): min=7075, max=15926, avg=11151.82, stdev=1170.13 00:11:16.254 lat (usec): min=7101, max=15952, avg=11234.19, stdev=1204.35 00:11:16.254 clat percentiles (usec): 00:11:16.254 | 1.00th=[ 8291], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:11:16.254 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:11:16.254 | 70.00th=[11469], 80.00th=[12125], 90.00th=[12780], 95.00th=[13304], 00:11:16.254 | 99.00th=[14615], 99.50th=[14746], 99.90th=[15926], 99.95th=[15926], 00:11:16.254 | 99.99th=[15926] 00:11:16.254 
write: IOPS=6046, BW=23.6MiB/s (24.8MB/s)(23.6MiB/1001msec); 0 zone resets 00:11:16.254 slat (usec): min=9, max=4678, avg=81.18, stdev=420.46 00:11:16.254 clat (usec): min=453, max=16873, avg=10541.93, stdev=1247.15 00:11:16.254 lat (usec): min=3958, max=16920, avg=10623.12, stdev=1300.53 00:11:16.254 clat percentiles (usec): 00:11:16.254 | 1.00th=[ 5342], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:11:16.254 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:11:16.255 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11863], 95.00th=[12387], 00:11:16.255 | 99.00th=[14615], 99.50th=[15139], 99.90th=[15926], 99.95th=[16188], 00:11:16.255 | 99.99th=[16909] 00:11:16.255 bw ( KiB/s): min=24576, max=24576, per=35.59%, avg=24576.00, stdev= 0.00, samples=1 00:11:16.255 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:16.255 lat (usec) : 500=0.01% 00:11:16.255 lat (msec) : 4=0.02%, 10=19.18%, 20=80.80% 00:11:16.255 cpu : usr=5.20%, sys=15.60%, ctx=529, majf=0, minf=5 00:11:16.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:16.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.255 issued rwts: total=5632,6053,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.255 job1: (groupid=0, jobs=1): err= 0: pid=77110: Mon Jul 15 20:27:37 2024 00:11:16.255 read: IOPS=2776, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1003msec) 00:11:16.255 slat (usec): min=3, max=10577, avg=183.72, stdev=810.88 00:11:16.255 clat (usec): min=1287, max=40918, avg=22185.44, stdev=6447.58 00:11:16.255 lat (usec): min=3487, max=40933, avg=22369.16, stdev=6465.18 00:11:16.255 clat percentiles (usec): 00:11:16.255 | 1.00th=[ 6194], 5.00th=[10683], 10.00th=[12649], 20.00th=[16188], 00:11:16.255 | 30.00th=[20317], 40.00th=[22414], 50.00th=[22938], 60.00th=[23462], 00:11:16.255 | 70.00th=[24773], 80.00th=[26084], 90.00th=[30016], 95.00th=[33162], 00:11:16.255 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:16.255 | 99.99th=[41157] 00:11:16.255 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:16.255 slat (usec): min=8, max=7891, avg=152.03, stdev=674.69 00:11:16.255 clat (usec): min=8927, max=41200, avg=21135.62, stdev=6286.47 00:11:16.255 lat (usec): min=8953, max=41221, avg=21287.65, stdev=6307.52 00:11:16.255 clat percentiles (usec): 00:11:16.255 | 1.00th=[ 9634], 5.00th=[12125], 10.00th=[12649], 20.00th=[13566], 00:11:16.255 | 30.00th=[17171], 40.00th=[20579], 50.00th=[23200], 60.00th=[23725], 00:11:16.255 | 70.00th=[23987], 80.00th=[25297], 90.00th=[29230], 95.00th=[30540], 00:11:16.255 | 99.00th=[38011], 99.50th=[38536], 99.90th=[41157], 99.95th=[41157], 00:11:16.255 | 99.99th=[41157] 00:11:16.255 bw ( KiB/s): min=12288, max=12288, per=17.79%, avg=12288.00, stdev= 0.00, samples=2 00:11:16.255 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:16.255 lat (msec) : 2=0.02%, 4=0.03%, 10=2.13%, 20=30.65%, 50=67.17% 00:11:16.255 cpu : usr=2.89%, sys=7.98%, ctx=811, majf=0, minf=15 00:11:16.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:16.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.255 issued rwts: total=2785,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:11:16.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.255 job2: (groupid=0, jobs=1): err= 0: pid=77111: Mon Jul 15 20:27:37 2024 00:11:16.255 read: IOPS=2693, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1003msec) 00:11:16.255 slat (usec): min=4, max=7291, avg=178.88, stdev=715.05 00:11:16.255 clat (usec): min=1124, max=41824, avg=22228.05, stdev=5210.69 00:11:16.255 lat (usec): min=2797, max=41844, avg=22406.93, stdev=5219.14 00:11:16.255 clat percentiles (usec): 00:11:16.255 | 1.00th=[ 3425], 5.00th=[13960], 10.00th=[14615], 20.00th=[17171], 00:11:16.255 | 30.00th=[21890], 40.00th=[22938], 50.00th=[23200], 60.00th=[23725], 00:11:16.255 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26608], 95.00th=[28705], 00:11:16.255 | 99.00th=[35390], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681], 00:11:16.255 | 99.99th=[41681] 00:11:16.255 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:16.255 slat (usec): min=4, max=7857, avg=160.54, stdev=682.13 00:11:16.255 clat (usec): min=11816, max=36463, avg=21648.46, stdev=5078.63 00:11:16.255 lat (usec): min=11874, max=36488, avg=21808.99, stdev=5081.26 00:11:16.255 clat percentiles (usec): 00:11:16.255 | 1.00th=[12256], 5.00th=[12649], 10.00th=[13173], 20.00th=[15926], 00:11:16.255 | 30.00th=[20055], 40.00th=[21365], 50.00th=[22676], 60.00th=[23462], 00:11:16.255 | 70.00th=[23987], 80.00th=[25035], 90.00th=[28443], 95.00th=[30016], 00:11:16.255 | 99.00th=[31851], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439], 00:11:16.255 | 99.99th=[36439] 00:11:16.255 bw ( KiB/s): min=12288, max=12288, per=17.79%, avg=12288.00, stdev= 0.00, samples=2 00:11:16.255 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:16.255 lat (msec) : 2=0.02%, 4=0.47%, 10=0.55%, 20=26.34%, 50=72.62% 00:11:16.255 cpu : usr=2.79%, sys=9.08%, ctx=763, majf=0, minf=10 00:11:16.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:16.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.255 issued rwts: total=2702,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.255 job3: (groupid=0, jobs=1): err= 0: pid=77112: Mon Jul 15 20:27:37 2024 00:11:16.255 read: IOPS=4823, BW=18.8MiB/s (19.8MB/s)(18.9MiB/1002msec) 00:11:16.255 slat (usec): min=5, max=3208, avg=97.67, stdev=446.96 00:11:16.255 clat (usec): min=599, max=15668, avg=12936.29, stdev=1296.01 00:11:16.255 lat (usec): min=3300, max=15684, avg=13033.96, stdev=1231.76 00:11:16.255 clat percentiles (usec): 00:11:16.255 | 1.00th=[ 6783], 5.00th=[10814], 10.00th=[11731], 20.00th=[12649], 00:11:16.255 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:11:16.255 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:11:16.255 | 99.00th=[14877], 99.50th=[15008], 99.90th=[15664], 99.95th=[15664], 00:11:16.255 | 99.99th=[15664] 00:11:16.255 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:16.255 slat (usec): min=9, max=3195, avg=95.51, stdev=420.04 00:11:16.255 clat (usec): min=9142, max=15554, avg=12491.89, stdev=1346.11 00:11:16.255 lat (usec): min=9526, max=15577, avg=12587.40, stdev=1340.83 00:11:16.255 clat percentiles (usec): 00:11:16.255 | 1.00th=[10159], 5.00th=[10683], 10.00th=[10945], 20.00th=[11076], 00:11:16.255 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12518], 
60.00th=[13173], 00:11:16.255 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:11:16.255 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:11:16.255 | 99.99th=[15533] 00:11:16.255 bw ( KiB/s): min=20439, max=20439, per=29.60%, avg=20439.00, stdev= 0.00, samples=1 00:11:16.255 iops : min= 5109, max= 5109, avg=5109.00, stdev= 0.00, samples=1 00:11:16.255 lat (usec) : 750=0.01% 00:11:16.255 lat (msec) : 4=0.31%, 10=0.86%, 20=98.81% 00:11:16.255 cpu : usr=4.90%, sys=13.49%, ctx=499, majf=0, minf=5 00:11:16.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:16.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:16.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:16.255 issued rwts: total=4833,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:16.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:16.255 00:11:16.255 Run status group 0 (all jobs): 00:11:16.255 READ: bw=62.1MiB/s (65.1MB/s), 10.5MiB/s-22.0MiB/s (11.0MB/s-23.0MB/s), io=62.3MiB (65.3MB), run=1001-1003msec 00:11:16.255 WRITE: bw=67.4MiB/s (70.7MB/s), 12.0MiB/s-23.6MiB/s (12.5MB/s-24.8MB/s), io=67.6MiB (70.9MB), run=1001-1003msec 00:11:16.255 00:11:16.255 Disk stats (read/write): 00:11:16.255 nvme0n1: ios=4939/5120, merge=0/0, ticks=25172/22188, in_queue=47360, util=88.06% 00:11:16.255 nvme0n2: ios=2540/2560, merge=0/0, ticks=14138/11726, in_queue=25864, util=87.83% 00:11:16.255 nvme0n3: ios=2427/2560, merge=0/0, ticks=13055/11608, in_queue=24663, util=88.95% 00:11:16.255 nvme0n4: ios=4096/4471, merge=0/0, ticks=12279/12045, in_queue=24324, util=89.81% 00:11:16.255 20:27:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:16.255 [global] 00:11:16.255 thread=1 00:11:16.255 invalidate=1 00:11:16.255 rw=randwrite 00:11:16.255 time_based=1 00:11:16.255 runtime=1 00:11:16.255 ioengine=libaio 00:11:16.255 direct=1 00:11:16.255 bs=4096 00:11:16.255 iodepth=128 00:11:16.255 norandommap=0 00:11:16.255 numjobs=1 00:11:16.255 00:11:16.255 verify_dump=1 00:11:16.255 verify_backlog=512 00:11:16.255 verify_state_save=0 00:11:16.255 do_verify=1 00:11:16.255 verify=crc32c-intel 00:11:16.255 [job0] 00:11:16.255 filename=/dev/nvme0n1 00:11:16.255 [job1] 00:11:16.255 filename=/dev/nvme0n2 00:11:16.255 [job2] 00:11:16.255 filename=/dev/nvme0n3 00:11:16.255 [job3] 00:11:16.255 filename=/dev/nvme0n4 00:11:16.255 Could not set queue depth (nvme0n1) 00:11:16.255 Could not set queue depth (nvme0n2) 00:11:16.255 Could not set queue depth (nvme0n3) 00:11:16.255 Could not set queue depth (nvme0n4) 00:11:16.255 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.255 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.255 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.255 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:16.255 fio-3.35 00:11:16.255 Starting 4 threads 00:11:17.189 00:11:17.189 job0: (groupid=0, jobs=1): err= 0: pid=77169: Mon Jul 15 20:27:38 2024 00:11:17.189 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:11:17.189 slat (usec): min=3, max=11021, avg=167.51, stdev=854.16 00:11:17.189 clat (usec): min=12970, 
max=32842, avg=21042.38, stdev=3156.35 00:11:17.189 lat (usec): min=12990, max=33514, avg=21209.89, stdev=3231.20 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[13960], 5.00th=[15533], 10.00th=[17171], 20.00th=[18744], 00:11:17.189 | 30.00th=[19268], 40.00th=[20579], 50.00th=[20841], 60.00th=[21627], 00:11:17.189 | 70.00th=[22414], 80.00th=[23462], 90.00th=[25035], 95.00th=[26608], 00:11:17.189 | 99.00th=[29754], 99.50th=[30540], 99.90th=[31065], 99.95th=[31851], 00:11:17.189 | 99.99th=[32900] 00:11:17.189 write: IOPS=3251, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1011msec); 0 zone resets 00:11:17.189 slat (usec): min=4, max=9184, avg=140.25, stdev=671.19 00:11:17.189 clat (usec): min=8784, max=30553, avg=19283.92, stdev=3023.30 00:11:17.189 lat (usec): min=8809, max=30571, avg=19424.16, stdev=3088.63 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[ 9241], 5.00th=[14615], 10.00th=[15926], 20.00th=[17171], 00:11:17.189 | 30.00th=[17957], 40.00th=[18744], 50.00th=[19792], 60.00th=[20317], 00:11:17.189 | 70.00th=[20841], 80.00th=[21365], 90.00th=[21890], 95.00th=[23462], 00:11:17.189 | 99.00th=[27132], 99.50th=[29492], 99.90th=[30540], 99.95th=[30540], 00:11:17.189 | 99.99th=[30540] 00:11:17.189 bw ( KiB/s): min=12455, max=12825, per=24.56%, avg=12640.00, stdev=261.63, samples=2 00:11:17.189 iops : min= 3113, max= 3206, avg=3159.50, stdev=65.76, samples=2 00:11:17.189 lat (msec) : 10=0.90%, 20=43.45%, 50=55.65% 00:11:17.189 cpu : usr=2.57%, sys=8.42%, ctx=856, majf=0, minf=7 00:11:17.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:17.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.189 issued rwts: total=3072,3287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.189 job1: (groupid=0, jobs=1): err= 0: pid=77171: Mon Jul 15 20:27:38 2024 00:11:17.189 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:11:17.189 slat (usec): min=2, max=7700, avg=160.90, stdev=745.57 00:11:17.189 clat (usec): min=14507, max=30933, avg=20310.36, stdev=2458.21 00:11:17.189 lat (usec): min=14517, max=30992, avg=20471.26, stdev=2549.17 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[15008], 5.00th=[16450], 10.00th=[17171], 20.00th=[18482], 00:11:17.189 | 30.00th=[19006], 40.00th=[19530], 50.00th=[20317], 60.00th=[20579], 00:11:17.189 | 70.00th=[21103], 80.00th=[22414], 90.00th=[23462], 95.00th=[25035], 00:11:17.189 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28705], 99.95th=[29492], 00:11:17.189 | 99.99th=[31065] 00:11:17.189 write: IOPS=3193, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1007msec); 0 zone resets 00:11:17.189 slat (usec): min=4, max=11415, avg=149.97, stdev=730.38 00:11:17.189 clat (usec): min=6208, max=30118, avg=19855.75, stdev=2654.98 00:11:17.189 lat (usec): min=10152, max=30145, avg=20005.71, stdev=2715.75 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[12518], 5.00th=[14746], 10.00th=[16909], 20.00th=[18220], 00:11:17.189 | 30.00th=[18744], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:11:17.189 | 70.00th=[20841], 80.00th=[21627], 90.00th=[22152], 95.00th=[24249], 00:11:17.189 | 99.00th=[27395], 99.50th=[28443], 99.90th=[29230], 99.95th=[29230], 00:11:17.189 | 99.99th=[30016] 00:11:17.189 bw ( KiB/s): min=12288, max=12424, per=24.01%, avg=12356.00, stdev=96.17, samples=2 00:11:17.189 iops : min= 3072, max= 3106, avg=3089.00, 
stdev=24.04, samples=2 00:11:17.189 lat (msec) : 10=0.02%, 20=43.54%, 50=56.44% 00:11:17.189 cpu : usr=2.58%, sys=8.35%, ctx=925, majf=0, minf=17 00:11:17.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:17.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.189 issued rwts: total=3072,3216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.189 job2: (groupid=0, jobs=1): err= 0: pid=77176: Mon Jul 15 20:27:38 2024 00:11:17.189 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:11:17.189 slat (usec): min=3, max=9548, avg=163.20, stdev=781.29 00:11:17.189 clat (usec): min=12172, max=32205, avg=20704.29, stdev=2873.45 00:11:17.189 lat (usec): min=12200, max=32375, avg=20867.49, stdev=2945.25 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[13960], 5.00th=[15664], 10.00th=[17171], 20.00th=[18744], 00:11:17.189 | 30.00th=[19530], 40.00th=[20055], 50.00th=[20579], 60.00th=[20841], 00:11:17.189 | 70.00th=[21627], 80.00th=[23200], 90.00th=[24249], 95.00th=[25822], 00:11:17.189 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29754], 99.95th=[30540], 00:11:17.189 | 99.99th=[32113] 00:11:17.189 write: IOPS=3265, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1011msec); 0 zone resets 00:11:17.189 slat (usec): min=4, max=9291, avg=143.83, stdev=671.75 00:11:17.189 clat (usec): min=3294, max=29942, avg=19535.35, stdev=2982.05 00:11:17.189 lat (usec): min=3310, max=29951, avg=19679.18, stdev=3028.26 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[10945], 5.00th=[13960], 10.00th=[15401], 20.00th=[17695], 00:11:17.189 | 30.00th=[18482], 40.00th=[19268], 50.00th=[20317], 60.00th=[20579], 00:11:17.189 | 70.00th=[21103], 80.00th=[21627], 90.00th=[22676], 95.00th=[23725], 00:11:17.189 | 99.00th=[24511], 99.50th=[25297], 99.90th=[27919], 99.95th=[29230], 00:11:17.189 | 99.99th=[30016] 00:11:17.189 bw ( KiB/s): min=12648, max=12718, per=24.64%, avg=12683.00, stdev=49.50, samples=2 00:11:17.189 iops : min= 3162, max= 3179, avg=3170.50, stdev=12.02, samples=2 00:11:17.189 lat (msec) : 4=0.11%, 10=0.05%, 20=40.94%, 50=58.90% 00:11:17.189 cpu : usr=2.48%, sys=8.51%, ctx=997, majf=0, minf=9 00:11:17.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:17.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.189 issued rwts: total=3072,3301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.189 job3: (groupid=0, jobs=1): err= 0: pid=77178: Mon Jul 15 20:27:38 2024 00:11:17.189 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:11:17.189 slat (usec): min=3, max=12446, avg=164.13, stdev=820.39 00:11:17.189 clat (usec): min=13338, max=33651, avg=20636.13, stdev=2622.17 00:11:17.189 lat (usec): min=13716, max=33692, avg=20800.26, stdev=2711.87 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[15533], 5.00th=[16909], 10.00th=[17957], 20.00th=[18744], 00:11:17.189 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20317], 60.00th=[20841], 00:11:17.189 | 70.00th=[21365], 80.00th=[21890], 90.00th=[24249], 95.00th=[25560], 00:11:17.189 | 99.00th=[30016], 99.50th=[30540], 99.90th=[33424], 99.95th=[33817], 00:11:17.189 | 99.99th=[33817] 00:11:17.189 write: IOPS=3181, 
BW=12.4MiB/s (13.0MB/s)(12.5MiB/1007msec); 0 zone resets 00:11:17.189 slat (usec): min=4, max=8473, avg=147.15, stdev=648.23 00:11:17.189 clat (usec): min=6687, max=30152, avg=19780.17, stdev=2598.04 00:11:17.189 lat (usec): min=6699, max=30395, avg=19927.32, stdev=2651.16 00:11:17.189 clat percentiles (usec): 00:11:17.189 | 1.00th=[13042], 5.00th=[14484], 10.00th=[16319], 20.00th=[18482], 00:11:17.189 | 30.00th=[19006], 40.00th=[19792], 50.00th=[20055], 60.00th=[20579], 00:11:17.189 | 70.00th=[20841], 80.00th=[21365], 90.00th=[22152], 95.00th=[23462], 00:11:17.189 | 99.00th=[26608], 99.50th=[27657], 99.90th=[28181], 99.95th=[29492], 00:11:17.189 | 99.99th=[30278] 00:11:17.189 bw ( KiB/s): min=12312, max=12408, per=24.02%, avg=12360.00, stdev=67.88, samples=2 00:11:17.189 iops : min= 3078, max= 3102, avg=3090.00, stdev=16.97, samples=2 00:11:17.190 lat (msec) : 10=0.29%, 20=44.12%, 50=55.59% 00:11:17.190 cpu : usr=2.29%, sys=8.95%, ctx=878, majf=0, minf=13 00:11:17.190 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:17.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.190 issued rwts: total=3072,3204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.190 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.190 00:11:17.190 Run status group 0 (all jobs): 00:11:17.190 READ: bw=47.5MiB/s (49.8MB/s), 11.9MiB/s-11.9MiB/s (12.4MB/s-12.5MB/s), io=48.0MiB (50.3MB), run=1007-1011msec 00:11:17.190 WRITE: bw=50.3MiB/s (52.7MB/s), 12.4MiB/s-12.8MiB/s (13.0MB/s-13.4MB/s), io=50.8MiB (53.3MB), run=1007-1011msec 00:11:17.190 00:11:17.190 Disk stats (read/write): 00:11:17.190 nvme0n1: ios=2610/2940, merge=0/0, ticks=25477/25541, in_queue=51018, util=88.88% 00:11:17.190 nvme0n2: ios=2609/2836, merge=0/0, ticks=24734/25721, in_queue=50455, util=87.69% 00:11:17.190 nvme0n3: ios=2591/2914, merge=0/0, ticks=25318/26040, in_queue=51358, util=89.87% 00:11:17.190 nvme0n4: ios=2560/2835, merge=0/0, ticks=24982/25508, in_queue=50490, util=88.54% 00:11:17.190 20:27:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:17.448 20:27:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=77192 00:11:17.448 20:27:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:17.448 20:27:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:17.448 [global] 00:11:17.448 thread=1 00:11:17.448 invalidate=1 00:11:17.448 rw=read 00:11:17.448 time_based=1 00:11:17.448 runtime=10 00:11:17.448 ioengine=libaio 00:11:17.448 direct=1 00:11:17.448 bs=4096 00:11:17.448 iodepth=1 00:11:17.448 norandommap=1 00:11:17.448 numjobs=1 00:11:17.448 00:11:17.448 [job0] 00:11:17.448 filename=/dev/nvme0n1 00:11:17.448 [job1] 00:11:17.448 filename=/dev/nvme0n2 00:11:17.448 [job2] 00:11:17.448 filename=/dev/nvme0n3 00:11:17.448 [job3] 00:11:17.448 filename=/dev/nvme0n4 00:11:17.448 Could not set queue depth (nvme0n1) 00:11:17.448 Could not set queue depth (nvme0n2) 00:11:17.448 Could not set queue depth (nvme0n3) 00:11:17.448 Could not set queue depth (nvme0n4) 00:11:17.448 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.448 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.448 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:17.448 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:17.448 fio-3.35 00:11:17.448 Starting 4 threads 00:11:20.728 20:27:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:20.728 fio: pid=77235, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:20.728 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=43827200, buflen=4096 00:11:20.728 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:20.984 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=65216512, buflen=4096 00:11:20.984 fio: pid=77234, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:20.984 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:20.984 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:21.241 fio: pid=77232, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:21.241 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=51351552, buflen=4096 00:11:21.241 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.241 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:21.498 fio: pid=77233, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:21.498 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11427840, buflen=4096 00:11:21.498 00:11:21.498 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77232: Mon Jul 15 20:27:42 2024 00:11:21.498 read: IOPS=3499, BW=13.7MiB/s (14.3MB/s)(49.0MiB/3583msec) 00:11:21.498 slat (usec): min=11, max=18423, avg=21.24, stdev=242.18 00:11:21.498 clat (usec): min=134, max=3548, avg=262.76, stdev=85.56 00:11:21.498 lat (usec): min=150, max=18637, avg=284.00, stdev=256.98 00:11:21.498 clat percentiles (usec): 00:11:21.498 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 172], 00:11:21.498 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:21.498 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:11:21.498 | 99.00th=[ 396], 99.50th=[ 433], 99.90th=[ 791], 99.95th=[ 1844], 00:11:21.498 | 99.99th=[ 3359] 00:11:21.498 bw ( KiB/s): min=12592, max=12920, per=21.31%, avg=12758.67, stdev=141.16, samples=6 00:11:21.498 iops : min= 3148, max= 3230, avg=3189.67, stdev=35.29, samples=6 00:11:21.498 lat (usec) : 250=25.50%, 500=74.27%, 750=0.12%, 1000=0.03% 00:11:21.498 lat (msec) : 2=0.03%, 4=0.04% 00:11:21.498 cpu : usr=1.17%, sys=5.19%, ctx=12546, majf=0, minf=1 00:11:21.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.498 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.498 issued rwts: total=12538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.498 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77233: Mon 
Jul 15 20:27:42 2024 00:11:21.498 read: IOPS=4920, BW=19.2MiB/s (20.2MB/s)(74.9MiB/3897msec) 00:11:21.498 slat (usec): min=11, max=13894, avg=21.11, stdev=189.70 00:11:21.498 clat (usec): min=35, max=4666, avg=180.43, stdev=84.91 00:11:21.498 lat (usec): min=147, max=14085, avg=201.54, stdev=209.06 00:11:21.498 clat percentiles (usec): 00:11:21.498 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:11:21.498 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:21.498 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 262], 95.00th=[ 285], 00:11:21.498 | 99.00th=[ 318], 99.50th=[ 355], 99.90th=[ 906], 99.95th=[ 2040], 00:11:21.498 | 99.99th=[ 3884] 00:11:21.498 bw ( KiB/s): min=12896, max=22064, per=32.49%, avg=19452.14, stdev=3261.68, samples=7 00:11:21.498 iops : min= 3224, max= 5516, avg=4863.00, stdev=815.42, samples=7 00:11:21.498 lat (usec) : 50=0.01%, 250=88.77%, 500=10.95%, 750=0.13%, 1000=0.05% 00:11:21.498 lat (msec) : 2=0.04%, 4=0.05%, 10=0.01% 00:11:21.498 cpu : usr=1.39%, sys=7.44%, ctx=19187, majf=0, minf=1 00:11:21.498 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.498 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.498 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.498 issued rwts: total=19175,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.498 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.498 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77234: Mon Jul 15 20:27:42 2024 00:11:21.498 read: IOPS=4864, BW=19.0MiB/s (19.9MB/s)(62.2MiB/3273msec) 00:11:21.498 slat (usec): min=12, max=7805, avg=16.43, stdev=80.27 00:11:21.498 clat (usec): min=3, max=875, avg=187.50, stdev=27.09 00:11:21.498 lat (usec): min=162, max=8015, avg=203.93, stdev=85.02 00:11:21.498 clat percentiles (usec): 00:11:21.499 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:11:21.499 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 188], 00:11:21.499 | 70.00th=[ 194], 80.00th=[ 204], 90.00th=[ 217], 95.00th=[ 231], 00:11:21.499 | 99.00th=[ 262], 99.50th=[ 302], 99.90th=[ 461], 99.95th=[ 506], 00:11:21.499 | 99.99th=[ 775] 00:11:21.499 bw ( KiB/s): min=17576, max=20736, per=32.51%, avg=19465.33, stdev=1235.58, samples=6 00:11:21.499 iops : min= 4394, max= 5184, avg=4866.33, stdev=308.90, samples=6 00:11:21.499 lat (usec) : 4=0.01%, 100=0.01%, 250=98.24%, 500=1.70%, 750=0.04% 00:11:21.499 lat (usec) : 1000=0.01% 00:11:21.499 cpu : usr=1.50%, sys=6.51%, ctx=15928, majf=0, minf=1 00:11:21.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.499 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.499 issued rwts: total=15923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.499 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77235: Mon Jul 15 20:27:42 2024 00:11:21.499 read: IOPS=3592, BW=14.0MiB/s (14.7MB/s)(41.8MiB/2979msec) 00:11:21.499 slat (nsec): min=10454, max=88673, avg=17056.80, stdev=5625.72 00:11:21.499 clat (usec): min=116, max=2434, avg=259.54, stdev=66.34 00:11:21.499 lat (usec): min=170, max=2451, avg=276.60, stdev=64.53 00:11:21.499 clat percentiles (usec): 00:11:21.499 | 1.00th=[ 163], 5.00th=[ 172], 
10.00th=[ 176], 20.00th=[ 186], 00:11:21.499 | 30.00th=[ 204], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 289], 00:11:21.499 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:11:21.499 | 99.00th=[ 359], 99.50th=[ 388], 99.90th=[ 570], 99.95th=[ 775], 00:11:21.499 | 99.99th=[ 2212] 00:11:21.499 bw ( KiB/s): min=12816, max=19304, per=24.51%, avg=14675.20, stdev=2817.58, samples=5 00:11:21.499 iops : min= 3204, max= 4826, avg=3668.80, stdev=704.40, samples=5 00:11:21.499 lat (usec) : 250=31.89%, 500=67.98%, 750=0.04%, 1000=0.04% 00:11:21.499 lat (msec) : 2=0.02%, 4=0.02% 00:11:21.499 cpu : usr=1.21%, sys=5.41%, ctx=10722, majf=0, minf=1 00:11:21.499 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:21.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.499 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:21.499 issued rwts: total=10701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:21.499 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:21.499 00:11:21.499 Run status group 0 (all jobs): 00:11:21.499 READ: bw=58.5MiB/s (61.3MB/s), 13.7MiB/s-19.2MiB/s (14.3MB/s-20.2MB/s), io=228MiB (239MB), run=2979-3897msec 00:11:21.499 00:11:21.499 Disk stats (read/write): 00:11:21.499 nvme0n1: ios=11355/0, merge=0/0, ticks=3116/0, in_queue=3116, util=94.94% 00:11:21.499 nvme0n2: ios=19043/0, merge=0/0, ticks=3548/0, in_queue=3548, util=95.36% 00:11:21.499 nvme0n3: ios=15113/0, merge=0/0, ticks=2897/0, in_queue=2897, util=96.46% 00:11:21.499 nvme0n4: ios=10356/0, merge=0/0, ticks=2694/0, in_queue=2694, util=96.80% 00:11:21.499 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.499 20:27:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:21.756 20:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:21.756 20:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:22.012 20:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.012 20:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:22.575 20:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.575 20:27:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:22.833 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:22.833 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 77192 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.090 20:27:44 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:23.090 nvmf hotplug test: fio failed as expected 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:23.090 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.397 rmmod nvme_tcp 00:11:23.397 rmmod nvme_fabrics 00:11:23.397 rmmod nvme_keyring 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 76697 ']' 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 76697 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 76697 ']' 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 76697 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76697 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:23.397 killing process with pid 76697 00:11:23.397 20:27:44 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76697' 00:11:23.397 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 76697 00:11:23.398 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 76697 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.700 20:27:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.700 20:27:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:23.700 ************************************ 00:11:23.700 END TEST nvmf_fio_target 00:11:23.700 ************************************ 00:11:23.700 00:11:23.700 real 0m20.004s 00:11:23.700 user 1m16.869s 00:11:23.700 sys 0m9.197s 00:11:23.700 20:27:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:23.700 20:27:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.700 20:27:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:23.700 20:27:45 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:23.700 20:27:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:23.700 20:27:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:23.700 20:27:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:23.700 ************************************ 00:11:23.700 START TEST nvmf_bdevio 00:11:23.700 ************************************ 00:11:23.700 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:23.701 * Looking for test storage... 
00:11:23.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.701 20:27:45 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:23.701 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:23.959 Cannot find device "nvmf_tgt_br" 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:23.959 Cannot find device "nvmf_tgt_br2" 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:23.959 Cannot find device "nvmf_tgt_br" 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:23.959 Cannot find device "nvmf_tgt_br2" 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:23.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:23.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:23.959 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:23.960 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:24.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:11:24.218 00:11:24.218 --- 10.0.0.2 ping statistics --- 00:11:24.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.218 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:24.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:24.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:11:24.218 00:11:24.218 --- 10.0.0.3 ping statistics --- 00:11:24.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.218 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:24.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:24.218 00:11:24.218 --- 10.0.0.1 ping statistics --- 00:11:24.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.218 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=77562 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 77562 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 77562 ']' 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:24.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:24.218 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.218 [2024-07-15 20:27:45.600610] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:11:24.218 [2024-07-15 20:27:45.600724] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.476 [2024-07-15 20:27:45.741101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:24.476 [2024-07-15 20:27:45.799150] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.476 [2024-07-15 20:27:45.799208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:24.476 [2024-07-15 20:27:45.799220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.476 [2024-07-15 20:27:45.799228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.476 [2024-07-15 20:27:45.799235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.476 [2024-07-15 20:27:45.799333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.476 [2024-07-15 20:27:45.799387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:24.476 [2024-07-15 20:27:45.799551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:24.476 [2024-07-15 20:27:45.799552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.476 [2024-07-15 20:27:45.935568] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.476 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.734 Malloc0 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.734 20:27:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
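For reference, the bdevio target bring-up traced above reduces to a short sequence of SPDK RPC calls. A minimal standalone sketch, assuming the target is reachable on the default /var/tmp/spdk.sock RPC socket that rpc_cmd wraps (the rpc.py path and that socket assumption are the only details not shown verbatim in the trace above):

  # create the TCP transport with the options used by this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # 64 MB malloc-backed bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # subsystem with allow-any-host and the serial used by the test
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # expose the malloc bdev as a namespace and listen on the in-namespace target address
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

All five commands and their arguments are copied from the rpc_cmd traces above; bdevio then connects to 10.0.0.2:4420 using the JSON config generated further down.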
00:11:24.734 [2024-07-15 20:27:46.010525] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:24.734 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:24.734 { 00:11:24.734 "params": { 00:11:24.735 "name": "Nvme$subsystem", 00:11:24.735 "trtype": "$TEST_TRANSPORT", 00:11:24.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.735 "adrfam": "ipv4", 00:11:24.735 "trsvcid": "$NVMF_PORT", 00:11:24.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.735 "hdgst": ${hdgst:-false}, 00:11:24.735 "ddgst": ${ddgst:-false} 00:11:24.735 }, 00:11:24.735 "method": "bdev_nvme_attach_controller" 00:11:24.735 } 00:11:24.735 EOF 00:11:24.735 )") 00:11:24.735 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:24.735 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:24.735 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:24.735 20:27:46 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:24.735 "params": { 00:11:24.735 "name": "Nvme1", 00:11:24.735 "trtype": "tcp", 00:11:24.735 "traddr": "10.0.0.2", 00:11:24.735 "adrfam": "ipv4", 00:11:24.735 "trsvcid": "4420", 00:11:24.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.735 "hdgst": false, 00:11:24.735 "ddgst": false 00:11:24.735 }, 00:11:24.735 "method": "bdev_nvme_attach_controller" 00:11:24.735 }' 00:11:24.735 [2024-07-15 20:27:46.069567] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:11:24.735 [2024-07-15 20:27:46.069668] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77603 ] 00:11:24.735 [2024-07-15 20:27:46.209377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:25.002 [2024-07-15 20:27:46.280668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.002 [2024-07-15 20:27:46.280788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.002 [2024-07-15 20:27:46.280793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.002 I/O targets: 00:11:25.002 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:25.002 00:11:25.002 00:11:25.002 CUnit - A unit testing framework for C - Version 2.1-3 00:11:25.002 http://cunit.sourceforge.net/ 00:11:25.002 00:11:25.002 00:11:25.002 Suite: bdevio tests on: Nvme1n1 00:11:25.002 Test: blockdev write read block ...passed 00:11:25.262 Test: blockdev write zeroes read block ...passed 00:11:25.262 Test: blockdev write zeroes read no split ...passed 00:11:25.262 Test: blockdev write zeroes read split ...passed 00:11:25.262 Test: blockdev write zeroes read split partial ...passed 00:11:25.262 Test: blockdev reset ...[2024-07-15 20:27:46.542325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:25.262 [2024-07-15 20:27:46.542457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae7180 (9): Bad file descriptor 00:11:25.262 [2024-07-15 20:27:46.556877] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:25.262 passed 00:11:25.262 Test: blockdev write read 8 blocks ...passed 00:11:25.262 Test: blockdev write read size > 128k ...passed 00:11:25.262 Test: blockdev write read invalid size ...passed 00:11:25.262 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:25.262 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:25.262 Test: blockdev write read max offset ...passed 00:11:25.262 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:25.262 Test: blockdev writev readv 8 blocks ...passed 00:11:25.262 Test: blockdev writev readv 30 x 1block ...passed 00:11:25.262 Test: blockdev writev readv block ...passed 00:11:25.262 Test: blockdev writev readv size > 128k ...passed 00:11:25.262 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:25.262 Test: blockdev comparev and writev ...[2024-07-15 20:27:46.730961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.731022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.731045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.731057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.731376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.731492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.731514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.731525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.732018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.732047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.732065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.732078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.732425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.732459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:25.262 [2024-07-15 20:27:46.732544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:25.262 [2024-07-15 20:27:46.732556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:25.532 passed 00:11:25.532 Test: blockdev nvme passthru rw ...passed 00:11:25.532 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:27:46.817292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.532 [2024-07-15 20:27:46.817431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:25.532 [2024-07-15 20:27:46.817627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.532 [2024-07-15 20:27:46.817909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:25.532 [2024-07-15 20:27:46.818165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.532 [2024-07-15 20:27:46.818198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:25.532 [2024-07-15 20:27:46.818423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:25.532 [2024-07-15 20:27:46.818521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:25.532 passed 00:11:25.532 Test: blockdev nvme admin passthru ...passed 00:11:25.532 Test: blockdev copy ...passed 00:11:25.532 00:11:25.532 Run Summary: Type Total Ran Passed Failed Inactive 00:11:25.532 suites 1 1 n/a 0 0 00:11:25.532 tests 23 23 23 0 0 00:11:25.532 asserts 152 152 152 0 n/a 00:11:25.532 00:11:25.532 Elapsed time = 0.887 seconds 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.791 rmmod nvme_tcp 00:11:25.791 rmmod nvme_fabrics 00:11:25.791 rmmod nvme_keyring 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 77562 ']' 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 77562 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
77562 ']' 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 77562 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77562 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77562' 00:11:25.791 killing process with pid 77562 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 77562 00:11:25.791 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 77562 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:26.064 00:11:26.064 real 0m2.283s 00:11:26.064 user 0m7.786s 00:11:26.064 sys 0m0.627s 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.064 20:27:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:26.064 ************************************ 00:11:26.064 END TEST nvmf_bdevio 00:11:26.064 ************************************ 00:11:26.064 20:27:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:26.064 20:27:47 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:26.064 20:27:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.064 20:27:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.064 20:27:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:26.064 ************************************ 00:11:26.064 START TEST nvmf_auth_target 00:11:26.064 ************************************ 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:26.064 * Looking for test storage... 
00:11:26.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.064 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:26.065 Cannot find device "nvmf_tgt_br" 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:11:26.065 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:26.329 Cannot find device "nvmf_tgt_br2" 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:26.329 Cannot find device "nvmf_tgt_br" 00:11:26.329 
20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:26.329 Cannot find device "nvmf_tgt_br2" 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:26.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:26.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:26.329 20:27:47 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:26.329 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:26.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:11:26.597 00:11:26.597 --- 10.0.0.2 ping statistics --- 00:11:26.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.597 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:26.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:26.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:26.597 00:11:26.597 --- 10.0.0.3 ping statistics --- 00:11:26.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.597 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:26.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:11:26.597 00:11:26.597 --- 10.0.0.1 ping statistics --- 00:11:26.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.597 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77783 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77783 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77783 ']' 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.597 20:27:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.597 20:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77808 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3a7eac545cc992ead0ba0426cc23bbd99fe3da53a29fe70b 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tAh 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3a7eac545cc992ead0ba0426cc23bbd99fe3da53a29fe70b 0 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3a7eac545cc992ead0ba0426cc23bbd99fe3da53a29fe70b 0 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3a7eac545cc992ead0ba0426cc23bbd99fe3da53a29fe70b 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tAh 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tAh 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.tAh 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5e9f5accfc73bfe0c2446b015a73e7731657f73a442e8b516662d347b49ed208 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.FU4 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5e9f5accfc73bfe0c2446b015a73e7731657f73a442e8b516662d347b49ed208 3 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5e9f5accfc73bfe0c2446b015a73e7731657f73a442e8b516662d347b49ed208 3 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5e9f5accfc73bfe0c2446b015a73e7731657f73a442e8b516662d347b49ed208 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:26.856 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.FU4 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.FU4 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.FU4 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7cf80e7d41b065c775376f19a0bb3d23 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CLP 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7cf80e7d41b065c775376f19a0bb3d23 1 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7cf80e7d41b065c775376f19a0bb3d23 1 
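(Condensed, each gen_dhchap_key call traced here follows the same recipe. The xxd/mktemp/chmod steps mirror the log; the exact wrapping done by the "python -" step is an assumption, base64 of the secret plus a CRC-32 trailer with a two-hex-digit digest id, matching the DHHC-1:xx:...: strings passed to nvme connect later in this log.)

# Sketch of one gen_dhchap_key round (e.g. "gen_dhchap_key null 48" above); illustrative only.
# Assumption: the DHHC-1 wrapping is base64(secret || CRC-32), digest id 00=null, 01=sha256,
# 02=sha384, 03=sha512; CRC byte order shown little-endian as an assumption.
key_hex=$(xxd -p -c0 -l 24 /dev/urandom)       # 24 random bytes -> 48 hex chars
key_file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key_hex" 00 <<'PY' > "$key_file"
import base64, binascii, struct, sys
secret, digest = sys.argv[1].encode(), sys.argv[2]
blob = secret + struct.pack("<I", binascii.crc32(secret))   # append assumed CRC-32 trailer
print(f"DHHC-1:{digest}:{base64.b64encode(blob).decode()}:")
PY
chmod 0600 "$key_file"                         # file is later registered via keyring_file_add_key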
00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7cf80e7d41b065c775376f19a0bb3d23 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CLP 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CLP 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CLP 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=55d0b73d40bb095a822425241d6762703897931be1f57d67 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:27.114 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ufv 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 55d0b73d40bb095a822425241d6762703897931be1f57d67 2 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 55d0b73d40bb095a822425241d6762703897931be1f57d67 2 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=55d0b73d40bb095a822425241d6762703897931be1f57d67 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ufv 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ufv 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ufv 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:11:27.115 
20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf6a5c109fcd90d61dd590411c64fa6c9b4eccefbeff0c80 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.17D 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf6a5c109fcd90d61dd590411c64fa6c9b4eccefbeff0c80 2 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf6a5c109fcd90d61dd590411c64fa6c9b4eccefbeff0c80 2 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf6a5c109fcd90d61dd590411c64fa6c9b4eccefbeff0c80 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.17D 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.17D 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.17D 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=05bcd61120272eab4f8a68a4c642b030 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.qKb 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 05bcd61120272eab4f8a68a4c642b030 1 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 05bcd61120272eab4f8a68a4c642b030 1 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=05bcd61120272eab4f8a68a4c642b030 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:11:27.115 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.qKb 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.qKb 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.qKb 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=758054f3e46ddbaba917537393c84f6ce417caaccff4d57a878c4f34b1799663 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xrd 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 758054f3e46ddbaba917537393c84f6ce417caaccff4d57a878c4f34b1799663 3 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 758054f3e46ddbaba917537393c84f6ce417caaccff4d57a878c4f34b1799663 3 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=758054f3e46ddbaba917537393c84f6ce417caaccff4d57a878c4f34b1799663 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xrd 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xrd 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.xrd 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77783 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77783 ']' 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
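(From here the log registers each generated key file with both RPC servers and then runs one authentication round per digest/dhgroup/key combination. One round, condensed from the traces that follow; the rpc.py invocations mirror the log, the grouping into a single listing and the variable names are illustrative.)

# One connect_authenticate round, condensed from the traces below.
SPDK=/home/vagrant/spdk_repo/spdk
HOSTRPC="$SPDK/scripts/rpc.py -s /var/tmp/host.sock"   # host app (spdk_tgt, pid 77808)
TGTRPC="$SPDK/scripts/rpc.py"                          # nvmf target, default /var/tmp/spdk.sock (pid 77783)
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5

# register the key material on both sides (repeated for key1/ckey1, key2/ckey2, key3)
$TGTRPC  keyring_file_add_key key0  /tmp/spdk.key-null.tAh
$HOSTRPC keyring_file_add_key key0  /tmp/spdk.key-null.tAh
$TGTRPC  keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FU4
$HOSTRPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FU4

# pin the host to one digest/dhgroup, allow the host on the subsystem, then attach
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$TGTRPC  nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
         -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify authentication completed, then detach before the next combination
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
$TGTRPC  nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect "completed"
$HOSTRPC bdev_nvme_detach_controller nvme0
# the log also exercises the kernel initiator for each round via
# nvme connect ... --dhchap-secret DHHC-1:xx:...: / nvme disconnect, as seen below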
00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.374 20:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77808 /var/tmp/host.sock 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 77808 ']' 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:27.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.631 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tAh 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.tAh 00:11:27.890 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tAh 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.FU4 ]] 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FU4 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FU4 00:11:28.148 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.FU4 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CLP 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CLP 00:11:28.406 20:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CLP 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.ufv ]] 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ufv 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ufv 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ufv 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.17D 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.17D 00:11:28.973 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.17D 00:11:29.231 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.qKb ]] 00:11:29.231 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qKb 00:11:29.231 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.231 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.489 20:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.489 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qKb 00:11:29.489 20:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qKb 00:11:29.748 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:11:29.748 
20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xrd 00:11:29.748 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.748 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.748 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.748 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xrd 00:11:29.748 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xrd 00:11:30.006 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:11:30.006 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:30.006 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:30.006 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:30.006 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:30.006 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.265 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.524 00:11:30.524 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:30.524 20:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:30.524 20:27:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:30.782 { 00:11:30.782 "auth": { 00:11:30.782 "dhgroup": "null", 00:11:30.782 "digest": "sha256", 00:11:30.782 "state": "completed" 00:11:30.782 }, 00:11:30.782 "cntlid": 1, 00:11:30.782 "listen_address": { 00:11:30.782 "adrfam": "IPv4", 00:11:30.782 "traddr": "10.0.0.2", 00:11:30.782 "trsvcid": "4420", 00:11:30.782 "trtype": "TCP" 00:11:30.782 }, 00:11:30.782 "peer_address": { 00:11:30.782 "adrfam": "IPv4", 00:11:30.782 "traddr": "10.0.0.1", 00:11:30.782 "trsvcid": "59946", 00:11:30.782 "trtype": "TCP" 00:11:30.782 }, 00:11:30.782 "qid": 0, 00:11:30.782 "state": "enabled", 00:11:30.782 "thread": "nvmf_tgt_poll_group_000" 00:11:30.782 } 00:11:30.782 ]' 00:11:30.782 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.041 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.299 20:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.566 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:36.566 20:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:36.825 { 00:11:36.825 "auth": { 00:11:36.825 "dhgroup": "null", 00:11:36.825 "digest": "sha256", 00:11:36.825 "state": "completed" 00:11:36.825 }, 00:11:36.825 "cntlid": 3, 00:11:36.825 "listen_address": { 00:11:36.825 "adrfam": "IPv4", 00:11:36.825 "traddr": "10.0.0.2", 00:11:36.825 "trsvcid": "4420", 00:11:36.825 "trtype": "TCP" 00:11:36.825 }, 00:11:36.825 "peer_address": { 
00:11:36.825 "adrfam": "IPv4", 00:11:36.825 "traddr": "10.0.0.1", 00:11:36.825 "trsvcid": "59968", 00:11:36.825 "trtype": "TCP" 00:11:36.825 }, 00:11:36.825 "qid": 0, 00:11:36.825 "state": "enabled", 00:11:36.825 "thread": "nvmf_tgt_poll_group_000" 00:11:36.825 } 00:11:36.825 ]' 00:11:36.825 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.084 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.343 20:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.277 20:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.843 00:11:38.843 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:38.843 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.843 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.102 { 00:11:39.102 "auth": { 00:11:39.102 "dhgroup": "null", 00:11:39.102 "digest": "sha256", 00:11:39.102 "state": "completed" 00:11:39.102 }, 00:11:39.102 "cntlid": 5, 00:11:39.102 "listen_address": { 00:11:39.102 "adrfam": "IPv4", 00:11:39.102 "traddr": "10.0.0.2", 00:11:39.102 "trsvcid": "4420", 00:11:39.102 "trtype": "TCP" 00:11:39.102 }, 00:11:39.102 "peer_address": { 00:11:39.102 "adrfam": "IPv4", 00:11:39.102 "traddr": "10.0.0.1", 00:11:39.102 "trsvcid": "51324", 00:11:39.102 "trtype": "TCP" 00:11:39.102 }, 00:11:39.102 "qid": 0, 00:11:39.102 "state": "enabled", 00:11:39.102 "thread": "nvmf_tgt_poll_group_000" 00:11:39.102 } 00:11:39.102 ]' 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.102 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.669 20:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:40.233 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.489 20:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.746 00:11:41.004 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:41.004 20:28:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.004 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:41.262 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.262 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.262 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.262 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.263 { 00:11:41.263 "auth": { 00:11:41.263 "dhgroup": "null", 00:11:41.263 "digest": "sha256", 00:11:41.263 "state": "completed" 00:11:41.263 }, 00:11:41.263 "cntlid": 7, 00:11:41.263 "listen_address": { 00:11:41.263 "adrfam": "IPv4", 00:11:41.263 "traddr": "10.0.0.2", 00:11:41.263 "trsvcid": "4420", 00:11:41.263 "trtype": "TCP" 00:11:41.263 }, 00:11:41.263 "peer_address": { 00:11:41.263 "adrfam": "IPv4", 00:11:41.263 "traddr": "10.0.0.1", 00:11:41.263 "trsvcid": "51352", 00:11:41.263 "trtype": "TCP" 00:11:41.263 }, 00:11:41.263 "qid": 0, 00:11:41.263 "state": "enabled", 00:11:41.263 "thread": "nvmf_tgt_poll_group_000" 00:11:41.263 } 00:11:41.263 ]' 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.263 20:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.830 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:42.395 20:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.653 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.219 00:11:43.219 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.219 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.219 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.477 { 00:11:43.477 "auth": { 00:11:43.477 "dhgroup": "ffdhe2048", 00:11:43.477 "digest": "sha256", 00:11:43.477 "state": "completed" 00:11:43.477 }, 00:11:43.477 "cntlid": 9, 00:11:43.477 "listen_address": { 00:11:43.477 "adrfam": "IPv4", 
00:11:43.477 "traddr": "10.0.0.2", 00:11:43.477 "trsvcid": "4420", 00:11:43.477 "trtype": "TCP" 00:11:43.477 }, 00:11:43.477 "peer_address": { 00:11:43.477 "adrfam": "IPv4", 00:11:43.477 "traddr": "10.0.0.1", 00:11:43.477 "trsvcid": "51376", 00:11:43.477 "trtype": "TCP" 00:11:43.477 }, 00:11:43.477 "qid": 0, 00:11:43.477 "state": "enabled", 00:11:43.477 "thread": "nvmf_tgt_poll_group_000" 00:11:43.477 } 00:11:43.477 ]' 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.477 20:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.043 20:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:44.611 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.870 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.436 00:11:45.436 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.436 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.436 20:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.694 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.694 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.694 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.694 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.694 20:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.694 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.694 { 00:11:45.694 "auth": { 00:11:45.694 "dhgroup": "ffdhe2048", 00:11:45.695 "digest": "sha256", 00:11:45.695 "state": "completed" 00:11:45.695 }, 00:11:45.695 "cntlid": 11, 00:11:45.695 "listen_address": { 00:11:45.695 "adrfam": "IPv4", 00:11:45.695 "traddr": "10.0.0.2", 00:11:45.695 "trsvcid": "4420", 00:11:45.695 "trtype": "TCP" 00:11:45.695 }, 00:11:45.695 "peer_address": { 00:11:45.695 "adrfam": "IPv4", 00:11:45.695 "traddr": "10.0.0.1", 00:11:45.695 "trsvcid": "51400", 00:11:45.695 "trtype": "TCP" 00:11:45.695 }, 00:11:45.695 "qid": 0, 00:11:45.695 "state": "enabled", 00:11:45.695 "thread": "nvmf_tgt_poll_group_000" 00:11:45.695 } 00:11:45.695 ]' 00:11:45.695 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.695 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.695 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.695 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:45.695 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.953 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.953 20:28:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.953 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.226 20:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:46.789 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.353 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.610 00:11:47.610 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.610 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.610 20:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.867 { 00:11:47.867 "auth": { 00:11:47.867 "dhgroup": "ffdhe2048", 00:11:47.867 "digest": "sha256", 00:11:47.867 "state": "completed" 00:11:47.867 }, 00:11:47.867 "cntlid": 13, 00:11:47.867 "listen_address": { 00:11:47.867 "adrfam": "IPv4", 00:11:47.867 "traddr": "10.0.0.2", 00:11:47.867 "trsvcid": "4420", 00:11:47.867 "trtype": "TCP" 00:11:47.867 }, 00:11:47.867 "peer_address": { 00:11:47.867 "adrfam": "IPv4", 00:11:47.867 "traddr": "10.0.0.1", 00:11:47.867 "trsvcid": "51424", 00:11:47.867 "trtype": "TCP" 00:11:47.867 }, 00:11:47.867 "qid": 0, 00:11:47.867 "state": "enabled", 00:11:47.867 "thread": "nvmf_tgt_poll_group_000" 00:11:47.867 } 00:11:47.867 ]' 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.867 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:48.126 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:48.126 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:48.126 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.126 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.126 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.384 20:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 
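(Editor's note: the trace above repeats one "connect_authenticate" round per key and dhgroup. The following is a condensed, hedged sketch of a single round reconstructed only from the commands already visible in this log; the socket path, addresses, host UUID and key/ckey names are the ones this run uses, and the target-side rpc.py is assumed to talk to its default socket, as the rpc_cmd wrapper does here.)
# host-side bdev_nvme options (the "hostrpc" wrapper in the trace):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# target-side: allow the host on the subsystem with the keypair under test:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
  nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
# host-side attach over TCP, then confirm the controller came up as nvme0:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
  bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0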
00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.326 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.606 20:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.607 20:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.607 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.607 20:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.864 00:11:49.864 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:49.864 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.864 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:50.122 { 00:11:50.122 "auth": { 00:11:50.122 "dhgroup": 
"ffdhe2048", 00:11:50.122 "digest": "sha256", 00:11:50.122 "state": "completed" 00:11:50.122 }, 00:11:50.122 "cntlid": 15, 00:11:50.122 "listen_address": { 00:11:50.122 "adrfam": "IPv4", 00:11:50.122 "traddr": "10.0.0.2", 00:11:50.122 "trsvcid": "4420", 00:11:50.122 "trtype": "TCP" 00:11:50.122 }, 00:11:50.122 "peer_address": { 00:11:50.122 "adrfam": "IPv4", 00:11:50.122 "traddr": "10.0.0.1", 00:11:50.122 "trsvcid": "49468", 00:11:50.122 "trtype": "TCP" 00:11:50.122 }, 00:11:50.122 "qid": 0, 00:11:50.122 "state": "enabled", 00:11:50.122 "thread": "nvmf_tgt_poll_group_000" 00:11:50.122 } 00:11:50.122 ]' 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:50.122 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:50.379 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.379 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.379 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.638 20:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:51.205 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# dhgroup=ffdhe3072 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.464 20:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.747 20:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.747 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.747 20:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:52.005 00:11:52.005 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:52.005 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.005 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:52.262 { 00:11:52.262 "auth": { 00:11:52.262 "dhgroup": "ffdhe3072", 00:11:52.262 "digest": "sha256", 00:11:52.262 "state": "completed" 00:11:52.262 }, 00:11:52.262 "cntlid": 17, 00:11:52.262 "listen_address": { 00:11:52.262 "adrfam": "IPv4", 00:11:52.262 "traddr": "10.0.0.2", 00:11:52.262 "trsvcid": "4420", 00:11:52.262 "trtype": "TCP" 00:11:52.262 }, 00:11:52.262 "peer_address": { 00:11:52.262 "adrfam": "IPv4", 00:11:52.262 "traddr": "10.0.0.1", 00:11:52.262 "trsvcid": "49494", 00:11:52.262 "trtype": "TCP" 00:11:52.262 }, 00:11:52.262 "qid": 0, 00:11:52.262 "state": "enabled", 00:11:52.262 "thread": "nvmf_tgt_poll_group_000" 00:11:52.262 } 00:11:52.262 ]' 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:52.262 20:28:13 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:52.520 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.520 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.520 20:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.781 20:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:53.373 20:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.935 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.935 
20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:54.193 00:11:54.193 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:54.193 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.193 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:54.450 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.450 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.450 20:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.450 20:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.450 20:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.450 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:54.451 { 00:11:54.451 "auth": { 00:11:54.451 "dhgroup": "ffdhe3072", 00:11:54.451 "digest": "sha256", 00:11:54.451 "state": "completed" 00:11:54.451 }, 00:11:54.451 "cntlid": 19, 00:11:54.451 "listen_address": { 00:11:54.451 "adrfam": "IPv4", 00:11:54.451 "traddr": "10.0.0.2", 00:11:54.451 "trsvcid": "4420", 00:11:54.451 "trtype": "TCP" 00:11:54.451 }, 00:11:54.451 "peer_address": { 00:11:54.451 "adrfam": "IPv4", 00:11:54.451 "traddr": "10.0.0.1", 00:11:54.451 "trsvcid": "49512", 00:11:54.451 "trtype": "TCP" 00:11:54.451 }, 00:11:54.451 "qid": 0, 00:11:54.451 "state": "enabled", 00:11:54.451 "thread": "nvmf_tgt_poll_group_000" 00:11:54.451 } 00:11:54.451 ]' 00:11:54.451 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:54.451 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.451 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:54.708 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:54.708 20:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:54.708 20:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.708 20:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.708 20:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.966 20:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
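(Editor's note: each round also exercises the kernel initiator. A hedged sketch of that leg, using only commands and arguments that appear in this trace; the DHHC-1 secret strings are truncated here for brevity and must be the full values printed in the log.)
# connect with nvme-cli, passing the DH-HMAC-CHAP secrets directly:
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
  --hostid ec49175a-6012-419b-81e2-f6fecd100da5 \
  --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."
# tear down before the next key/dhgroup combination:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
  nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5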
00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.899 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:56.465 00:11:56.465 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:56.465 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:56.465 20:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:56.756 { 00:11:56.756 "auth": { 00:11:56.756 "dhgroup": "ffdhe3072", 00:11:56.756 "digest": "sha256", 00:11:56.756 "state": "completed" 00:11:56.756 }, 00:11:56.756 "cntlid": 21, 00:11:56.756 "listen_address": { 00:11:56.756 "adrfam": "IPv4", 00:11:56.756 "traddr": "10.0.0.2", 00:11:56.756 "trsvcid": "4420", 00:11:56.756 "trtype": "TCP" 00:11:56.756 }, 00:11:56.756 "peer_address": { 00:11:56.756 "adrfam": "IPv4", 00:11:56.756 "traddr": "10.0.0.1", 00:11:56.756 "trsvcid": "49538", 00:11:56.756 "trtype": "TCP" 00:11:56.756 }, 00:11:56.756 "qid": 0, 00:11:56.756 "state": "enabled", 00:11:56.756 "thread": "nvmf_tgt_poll_group_000" 00:11:56.756 } 00:11:56.756 ]' 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.756 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.027 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:57.027 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.027 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.027 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.027 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.285 20:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:11:57.851 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.851 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:11:57.851 20:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.851 20:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.110 20:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.110 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.110 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.110 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:11:58.368 20:28:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:58.368 20:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:58.626 00:11:58.626 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:58.626 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:58.626 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:58.884 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:58.884 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:58.884 20:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.884 20:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.142 { 00:11:59.142 "auth": { 00:11:59.142 "dhgroup": "ffdhe3072", 00:11:59.142 "digest": "sha256", 00:11:59.142 "state": "completed" 00:11:59.142 }, 00:11:59.142 "cntlid": 23, 00:11:59.142 "listen_address": { 00:11:59.142 "adrfam": "IPv4", 00:11:59.142 "traddr": "10.0.0.2", 00:11:59.142 "trsvcid": "4420", 00:11:59.142 "trtype": "TCP" 00:11:59.142 }, 00:11:59.142 "peer_address": { 00:11:59.142 "adrfam": "IPv4", 00:11:59.142 "traddr": "10.0.0.1", 00:11:59.142 "trsvcid": "42922", 00:11:59.142 "trtype": "TCP" 00:11:59.142 }, 00:11:59.142 "qid": 0, 00:11:59.142 "state": "enabled", 00:11:59.142 "thread": "nvmf_tgt_poll_group_000" 00:11:59.142 } 00:11:59.142 ]' 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.142 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:59.401 20:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:00.336 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.336 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:00.336 20:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.337 20:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:00.903 00:12:00.903 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:00.903 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:00.903 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:01.162 { 00:12:01.162 "auth": { 00:12:01.162 "dhgroup": "ffdhe4096", 00:12:01.162 "digest": "sha256", 00:12:01.162 "state": "completed" 00:12:01.162 }, 00:12:01.162 "cntlid": 25, 00:12:01.162 "listen_address": { 00:12:01.162 "adrfam": "IPv4", 00:12:01.162 "traddr": "10.0.0.2", 00:12:01.162 "trsvcid": "4420", 00:12:01.162 "trtype": "TCP" 00:12:01.162 }, 00:12:01.162 "peer_address": { 00:12:01.162 "adrfam": "IPv4", 00:12:01.162 "traddr": "10.0.0.1", 00:12:01.162 "trsvcid": "42960", 00:12:01.162 "trtype": "TCP" 00:12:01.162 }, 00:12:01.162 "qid": 0, 00:12:01.162 "state": "enabled", 00:12:01.162 "thread": "nvmf_tgt_poll_group_000" 00:12:01.162 } 00:12:01.162 ]' 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.162 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:01.729 20:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret 
DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:02.294 20:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.860 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:03.118 00:12:03.118 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.118 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.118 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
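(Editor's note: the jq checks scattered through the trace amount to the assertions below. This is a sketch, assuming the nvmf_subsystem_get_qpairs output is captured into a shell variable as auth.sh does; the expected dhgroup is whichever one the current round configured, ffdhe4096 here.)
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
  nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# the admin qpair must report the negotiated digest, dhgroup and a completed state
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]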
00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:03.377 { 00:12:03.377 "auth": { 00:12:03.377 "dhgroup": "ffdhe4096", 00:12:03.377 "digest": "sha256", 00:12:03.377 "state": "completed" 00:12:03.377 }, 00:12:03.377 "cntlid": 27, 00:12:03.377 "listen_address": { 00:12:03.377 "adrfam": "IPv4", 00:12:03.377 "traddr": "10.0.0.2", 00:12:03.377 "trsvcid": "4420", 00:12:03.377 "trtype": "TCP" 00:12:03.377 }, 00:12:03.377 "peer_address": { 00:12:03.377 "adrfam": "IPv4", 00:12:03.377 "traddr": "10.0.0.1", 00:12:03.377 "trsvcid": "42974", 00:12:03.377 "trtype": "TCP" 00:12:03.377 }, 00:12:03.377 "qid": 0, 00:12:03.377 "state": "enabled", 00:12:03.377 "thread": "nvmf_tgt_poll_group_000" 00:12:03.377 } 00:12:03.377 ]' 00:12:03.377 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.636 20:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:03.894 20:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:04.854 20:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:04.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.854 20:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.112 20:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.112 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.112 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:05.370 00:12:05.370 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.370 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:05.370 20:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:05.627 { 00:12:05.627 "auth": { 00:12:05.627 "dhgroup": "ffdhe4096", 00:12:05.627 "digest": "sha256", 00:12:05.627 "state": "completed" 00:12:05.627 }, 00:12:05.627 "cntlid": 29, 00:12:05.627 "listen_address": { 00:12:05.627 "adrfam": "IPv4", 00:12:05.627 "traddr": "10.0.0.2", 00:12:05.627 "trsvcid": "4420", 00:12:05.627 "trtype": "TCP" 00:12:05.627 }, 00:12:05.627 "peer_address": { 00:12:05.627 "adrfam": "IPv4", 00:12:05.627 "traddr": "10.0.0.1", 00:12:05.627 "trsvcid": "43012", 00:12:05.627 "trtype": "TCP" 00:12:05.627 }, 00:12:05.627 "qid": 0, 00:12:05.627 "state": "enabled", 00:12:05.627 "thread": 
"nvmf_tgt_poll_group_000" 00:12:05.627 } 00:12:05.627 ]' 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.627 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:05.884 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:05.884 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:05.884 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.884 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.884 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.140 20:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:06.707 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:06.964 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.221 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:07.479 00:12:07.479 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:07.479 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:07.479 20:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:07.738 { 00:12:07.738 "auth": { 00:12:07.738 "dhgroup": "ffdhe4096", 00:12:07.738 "digest": "sha256", 00:12:07.738 "state": "completed" 00:12:07.738 }, 00:12:07.738 "cntlid": 31, 00:12:07.738 "listen_address": { 00:12:07.738 "adrfam": "IPv4", 00:12:07.738 "traddr": "10.0.0.2", 00:12:07.738 "trsvcid": "4420", 00:12:07.738 "trtype": "TCP" 00:12:07.738 }, 00:12:07.738 "peer_address": { 00:12:07.738 "adrfam": "IPv4", 00:12:07.738 "traddr": "10.0.0.1", 00:12:07.738 "trsvcid": "43040", 00:12:07.738 "trtype": "TCP" 00:12:07.738 }, 00:12:07.738 "qid": 0, 00:12:07.738 "state": "enabled", 00:12:07.738 "thread": "nvmf_tgt_poll_group_000" 00:12:07.738 } 00:12:07.738 ]' 00:12:07.738 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.995 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.252 20:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid 
ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:09.185 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.443 20:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.008 00:12:10.008 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.008 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.008 20:28:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.266 { 00:12:10.266 "auth": { 00:12:10.266 "dhgroup": "ffdhe6144", 00:12:10.266 "digest": "sha256", 00:12:10.266 "state": "completed" 00:12:10.266 }, 00:12:10.266 "cntlid": 33, 00:12:10.266 "listen_address": { 00:12:10.266 "adrfam": "IPv4", 00:12:10.266 "traddr": "10.0.0.2", 00:12:10.266 "trsvcid": "4420", 00:12:10.266 "trtype": "TCP" 00:12:10.266 }, 00:12:10.266 "peer_address": { 00:12:10.266 "adrfam": "IPv4", 00:12:10.266 "traddr": "10.0.0.1", 00:12:10.266 "trsvcid": "45546", 00:12:10.266 "trtype": "TCP" 00:12:10.266 }, 00:12:10.266 "qid": 0, 00:12:10.266 "state": "enabled", 00:12:10.266 "thread": "nvmf_tgt_poll_group_000" 00:12:10.266 } 00:12:10.266 ]' 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:10.266 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.524 20:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:11.458 
20:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:11.458 20:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.716 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.296 00:12:12.296 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.296 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.296 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:12.571 { 00:12:12.571 "auth": { 00:12:12.571 "dhgroup": "ffdhe6144", 00:12:12.571 "digest": "sha256", 00:12:12.571 "state": "completed" 00:12:12.571 }, 00:12:12.571 "cntlid": 35, 00:12:12.571 "listen_address": { 00:12:12.571 "adrfam": "IPv4", 00:12:12.571 "traddr": "10.0.0.2", 00:12:12.571 "trsvcid": "4420", 00:12:12.571 "trtype": "TCP" 00:12:12.571 }, 00:12:12.571 "peer_address": { 00:12:12.571 
"adrfam": "IPv4", 00:12:12.571 "traddr": "10.0.0.1", 00:12:12.571 "trsvcid": "45578", 00:12:12.571 "trtype": "TCP" 00:12:12.571 }, 00:12:12.571 "qid": 0, 00:12:12.571 "state": "enabled", 00:12:12.571 "thread": "nvmf_tgt_poll_group_000" 00:12:12.571 } 00:12:12.571 ]' 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:12.571 20:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:12.571 20:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.571 20:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.571 20:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.829 20:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:13.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:13.768 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.027 20:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.028 20:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.028 20:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.028 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.028 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:14.594 00:12:14.594 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:14.594 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:14.594 20:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:14.594 { 00:12:14.594 "auth": { 00:12:14.594 "dhgroup": "ffdhe6144", 00:12:14.594 "digest": "sha256", 00:12:14.594 "state": "completed" 00:12:14.594 }, 00:12:14.594 "cntlid": 37, 00:12:14.594 "listen_address": { 00:12:14.594 "adrfam": "IPv4", 00:12:14.594 "traddr": "10.0.0.2", 00:12:14.594 "trsvcid": "4420", 00:12:14.594 "trtype": "TCP" 00:12:14.594 }, 00:12:14.594 "peer_address": { 00:12:14.594 "adrfam": "IPv4", 00:12:14.594 "traddr": "10.0.0.1", 00:12:14.594 "trsvcid": "45608", 00:12:14.594 "trtype": "TCP" 00:12:14.594 }, 00:12:14.594 "qid": 0, 00:12:14.594 "state": "enabled", 00:12:14.594 "thread": "nvmf_tgt_poll_group_000" 00:12:14.594 } 00:12:14.594 ]' 00:12:14.594 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:14.853 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.111 20:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.047 20:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:16.613 00:12:16.614 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:12:16.614 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:16.614 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:16.872 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:16.872 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:16.873 20:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.873 20:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.873 20:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.873 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:16.873 { 00:12:16.873 "auth": { 00:12:16.873 "dhgroup": "ffdhe6144", 00:12:16.873 "digest": "sha256", 00:12:16.873 "state": "completed" 00:12:16.873 }, 00:12:16.873 "cntlid": 39, 00:12:16.873 "listen_address": { 00:12:16.873 "adrfam": "IPv4", 00:12:16.873 "traddr": "10.0.0.2", 00:12:16.873 "trsvcid": "4420", 00:12:16.873 "trtype": "TCP" 00:12:16.873 }, 00:12:16.873 "peer_address": { 00:12:16.873 "adrfam": "IPv4", 00:12:16.873 "traddr": "10.0.0.1", 00:12:16.873 "trsvcid": "45638", 00:12:16.873 "trtype": "TCP" 00:12:16.873 }, 00:12:16.873 "qid": 0, 00:12:16.873 "state": "enabled", 00:12:16.873 "thread": "nvmf_tgt_poll_group_000" 00:12:16.873 } 00:12:16.873 ]' 00:12:16.873 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.131 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.390 20:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.329 20:28:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:18.329 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:18.587 20:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:19.154 00:12:19.154 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.154 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.154 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.413 { 00:12:19.413 "auth": { 00:12:19.413 "dhgroup": "ffdhe8192", 00:12:19.413 "digest": "sha256", 00:12:19.413 "state": "completed" 00:12:19.413 }, 00:12:19.413 "cntlid": 41, 00:12:19.413 
"listen_address": { 00:12:19.413 "adrfam": "IPv4", 00:12:19.413 "traddr": "10.0.0.2", 00:12:19.413 "trsvcid": "4420", 00:12:19.413 "trtype": "TCP" 00:12:19.413 }, 00:12:19.413 "peer_address": { 00:12:19.413 "adrfam": "IPv4", 00:12:19.413 "traddr": "10.0.0.1", 00:12:19.413 "trsvcid": "57780", 00:12:19.413 "trtype": "TCP" 00:12:19.413 }, 00:12:19.413 "qid": 0, 00:12:19.413 "state": "enabled", 00:12:19.413 "thread": "nvmf_tgt_poll_group_000" 00:12:19.413 } 00:12:19.413 ]' 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.413 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.672 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:19.672 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.672 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.672 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.672 20:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.951 20:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:20.889 20:28:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:20.889 20:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:21.825 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.825 { 00:12:21.825 "auth": { 00:12:21.825 "dhgroup": "ffdhe8192", 00:12:21.825 "digest": "sha256", 00:12:21.825 "state": "completed" 00:12:21.825 }, 00:12:21.825 "cntlid": 43, 00:12:21.825 "listen_address": { 00:12:21.825 "adrfam": "IPv4", 00:12:21.825 "traddr": "10.0.0.2", 00:12:21.825 "trsvcid": "4420", 00:12:21.825 "trtype": "TCP" 00:12:21.825 }, 00:12:21.825 "peer_address": { 00:12:21.825 "adrfam": "IPv4", 00:12:21.825 "traddr": "10.0.0.1", 00:12:21.825 "trsvcid": "57818", 00:12:21.825 "trtype": "TCP" 00:12:21.825 }, 00:12:21.825 "qid": 0, 00:12:21.825 "state": "enabled", 00:12:21.825 "thread": "nvmf_tgt_poll_group_000" 00:12:21.825 } 00:12:21.825 ]' 00:12:21.825 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.083 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.341 20:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:23.276 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.276 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:23.276 20:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.276 20:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:23.277 20:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.207 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:24.207 { 00:12:24.207 "auth": { 00:12:24.207 "dhgroup": "ffdhe8192", 00:12:24.207 "digest": "sha256", 00:12:24.207 "state": "completed" 00:12:24.207 }, 00:12:24.207 "cntlid": 45, 00:12:24.207 "listen_address": { 00:12:24.207 "adrfam": "IPv4", 00:12:24.207 "traddr": "10.0.0.2", 00:12:24.207 "trsvcid": "4420", 00:12:24.207 "trtype": "TCP" 00:12:24.207 }, 00:12:24.207 "peer_address": { 00:12:24.207 "adrfam": "IPv4", 00:12:24.207 "traddr": "10.0.0.1", 00:12:24.207 "trsvcid": "57848", 00:12:24.207 "trtype": "TCP" 00:12:24.207 }, 00:12:24.207 "qid": 0, 00:12:24.207 "state": "enabled", 00:12:24.207 "thread": "nvmf_tgt_poll_group_000" 00:12:24.207 } 00:12:24.207 ]' 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.207 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:24.464 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:24.464 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:24.464 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.464 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.464 20:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.721 20:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:25.286 20:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:25.545 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:12:25.545 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.545 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:12:25.545 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:25.545 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:25.545 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.801 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:12:25.801 20:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.801 20:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.801 20:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.801 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:25.801 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:26.362 00:12:26.362 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:26.362 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.362 20:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:12:26.619 { 00:12:26.619 "auth": { 00:12:26.619 "dhgroup": "ffdhe8192", 00:12:26.619 "digest": "sha256", 00:12:26.619 "state": "completed" 00:12:26.619 }, 00:12:26.619 "cntlid": 47, 00:12:26.619 "listen_address": { 00:12:26.619 "adrfam": "IPv4", 00:12:26.619 "traddr": "10.0.0.2", 00:12:26.619 "trsvcid": "4420", 00:12:26.619 "trtype": "TCP" 00:12:26.619 }, 00:12:26.619 "peer_address": { 00:12:26.619 "adrfam": "IPv4", 00:12:26.619 "traddr": "10.0.0.1", 00:12:26.619 "trsvcid": "57878", 00:12:26.619 "trtype": "TCP" 00:12:26.619 }, 00:12:26.619 "qid": 0, 00:12:26.619 "state": "enabled", 00:12:26.619 "thread": "nvmf_tgt_poll_group_000" 00:12:26.619 } 00:12:26.619 ]' 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.619 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:26.876 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:26.876 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:26.876 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.876 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.876 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.132 20:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.061 20:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.319 20:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.319 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.319 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.575 00:12:28.575 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:28.575 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:28.575 20:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:28.832 { 00:12:28.832 "auth": { 00:12:28.832 "dhgroup": "null", 00:12:28.832 "digest": "sha384", 00:12:28.832 "state": "completed" 00:12:28.832 }, 00:12:28.832 "cntlid": 49, 00:12:28.832 "listen_address": { 00:12:28.832 "adrfam": "IPv4", 00:12:28.832 "traddr": "10.0.0.2", 00:12:28.832 "trsvcid": "4420", 00:12:28.832 "trtype": "TCP" 00:12:28.832 }, 00:12:28.832 "peer_address": { 00:12:28.832 "adrfam": "IPv4", 00:12:28.832 "traddr": "10.0.0.1", 00:12:28.832 "trsvcid": "46410", 00:12:28.832 "trtype": "TCP" 00:12:28.832 }, 00:12:28.832 "qid": 0, 00:12:28.832 "state": "enabled", 00:12:28.832 "thread": "nvmf_tgt_poll_group_000" 00:12:28.832 } 00:12:28.832 ]' 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:28.832 20:28:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:28.832 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.088 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.088 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.088 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.344 20:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.274 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.531 20:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.792 00:12:30.792 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:30.792 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.792 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:31.055 { 00:12:31.055 "auth": { 00:12:31.055 "dhgroup": "null", 00:12:31.055 "digest": "sha384", 00:12:31.055 "state": "completed" 00:12:31.055 }, 00:12:31.055 "cntlid": 51, 00:12:31.055 "listen_address": { 00:12:31.055 "adrfam": "IPv4", 00:12:31.055 "traddr": "10.0.0.2", 00:12:31.055 "trsvcid": "4420", 00:12:31.055 "trtype": "TCP" 00:12:31.055 }, 00:12:31.055 "peer_address": { 00:12:31.055 "adrfam": "IPv4", 00:12:31.055 "traddr": "10.0.0.1", 00:12:31.055 "trsvcid": "46432", 00:12:31.055 "trtype": "TCP" 00:12:31.055 }, 00:12:31.055 "qid": 0, 00:12:31.055 "state": "enabled", 00:12:31.055 "thread": "nvmf_tgt_poll_group_000" 00:12:31.055 } 00:12:31.055 ]' 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.055 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.313 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:31.313 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.313 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.313 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.313 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.571 20:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:32.137 20:28:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.137 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.137 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:32.137 20:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.137 20:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.138 20:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.138 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.138 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.138 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.396 20:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.962 00:12:32.962 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:32.962 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:32.962 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.220 { 00:12:33.220 "auth": { 00:12:33.220 "dhgroup": "null", 00:12:33.220 "digest": "sha384", 00:12:33.220 "state": "completed" 00:12:33.220 }, 00:12:33.220 "cntlid": 53, 00:12:33.220 "listen_address": { 00:12:33.220 "adrfam": "IPv4", 00:12:33.220 "traddr": "10.0.0.2", 00:12:33.220 "trsvcid": "4420", 00:12:33.220 "trtype": "TCP" 00:12:33.220 }, 00:12:33.220 "peer_address": { 00:12:33.220 "adrfam": "IPv4", 00:12:33.220 "traddr": "10.0.0.1", 00:12:33.220 "trsvcid": "46452", 00:12:33.220 "trtype": "TCP" 00:12:33.220 }, 00:12:33.220 "qid": 0, 00:12:33.220 "state": "enabled", 00:12:33.220 "thread": "nvmf_tgt_poll_group_000" 00:12:33.220 } 00:12:33.220 ]' 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.220 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.478 20:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:34.412 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.413 20:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:34.980 00:12:34.980 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:34.980 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.980 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.238 { 00:12:35.238 "auth": { 00:12:35.238 "dhgroup": "null", 00:12:35.238 "digest": "sha384", 00:12:35.238 "state": "completed" 00:12:35.238 }, 00:12:35.238 "cntlid": 55, 00:12:35.238 "listen_address": { 00:12:35.238 "adrfam": "IPv4", 00:12:35.238 "traddr": "10.0.0.2", 00:12:35.238 "trsvcid": "4420", 00:12:35.238 "trtype": "TCP" 00:12:35.238 }, 00:12:35.238 "peer_address": { 00:12:35.238 "adrfam": "IPv4", 00:12:35.238 "traddr": "10.0.0.1", 00:12:35.238 "trsvcid": "46468", 00:12:35.238 "trtype": "TCP" 00:12:35.238 }, 00:12:35.238 "qid": 0, 00:12:35.238 "state": "enabled", 00:12:35.238 "thread": "nvmf_tgt_poll_group_000" 00:12:35.238 } 00:12:35.238 ]' 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.238 20:28:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.238 20:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.803 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:36.370 20:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.629 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.887 00:12:36.887 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:36.887 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:36.887 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.456 { 00:12:37.456 "auth": { 00:12:37.456 "dhgroup": "ffdhe2048", 00:12:37.456 "digest": "sha384", 00:12:37.456 "state": "completed" 00:12:37.456 }, 00:12:37.456 "cntlid": 57, 00:12:37.456 "listen_address": { 00:12:37.456 "adrfam": "IPv4", 00:12:37.456 "traddr": "10.0.0.2", 00:12:37.456 "trsvcid": "4420", 00:12:37.456 "trtype": "TCP" 00:12:37.456 }, 00:12:37.456 "peer_address": { 00:12:37.456 "adrfam": "IPv4", 00:12:37.456 "traddr": "10.0.0.1", 00:12:37.456 "trsvcid": "46476", 00:12:37.456 "trtype": "TCP" 00:12:37.456 }, 00:12:37.456 "qid": 0, 00:12:37.456 "state": "enabled", 00:12:37.456 "thread": "nvmf_tgt_poll_group_000" 00:12:37.456 } 00:12:37.456 ]' 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.456 20:28:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.716 20:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret 
DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:38.652 20:28:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.911 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.170 00:12:39.170 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.170 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.170 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
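The cycle above repeats for every key index and DH group: the target is told which key pair the host may use, the SPDK host application serving /var/tmp/host.sock attaches a controller with DH-HMAC-CHAP enabled, the negotiated auth parameters are read back from the target, and the controller is detached again. As a reading aid, here is a minimal bash sketch of one such iteration, reconstructed from this trace rather than taken from target/auth.sh itself; the subsystem and host NQNs are the ones used in this run, and the target-side RPC socket (hidden behind rpc_cmd in the log) is assumed to be the default one.

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
keyid=1

# Target side: permit the host to authenticate with this key pair (rpc_cmd in the log;
# using the default target socket here is an assumption, it is not visible in the trace).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: constrain the digest/dhgroup under test, then attach with the same key pair.
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify on the target that authentication ran with the expected parameters.
$RPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# expected: sha384 / ffdhe2048 / completed

# Tear down before the next key index (in the trace an nvme-cli connect/disconnect also
# runs before remove_host; see the sketch at the end of this excerpt).
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"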
00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.428 { 00:12:39.428 "auth": { 00:12:39.428 "dhgroup": "ffdhe2048", 00:12:39.428 "digest": "sha384", 00:12:39.428 "state": "completed" 00:12:39.428 }, 00:12:39.428 "cntlid": 59, 00:12:39.428 "listen_address": { 00:12:39.428 "adrfam": "IPv4", 00:12:39.428 "traddr": "10.0.0.2", 00:12:39.428 "trsvcid": "4420", 00:12:39.428 "trtype": "TCP" 00:12:39.428 }, 00:12:39.428 "peer_address": { 00:12:39.428 "adrfam": "IPv4", 00:12:39.428 "traddr": "10.0.0.1", 00:12:39.428 "trsvcid": "52286", 00:12:39.428 "trtype": "TCP" 00:12:39.428 }, 00:12:39.428 "qid": 0, 00:12:39.428 "state": "enabled", 00:12:39.428 "thread": "nvmf_tgt_poll_group_000" 00:12:39.428 } 00:12:39.428 ]' 00:12:39.428 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.686 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.686 20:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.686 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:39.686 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.686 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.686 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.686 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.946 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:40.513 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.513 20:29:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:40.513 20:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.513 20:29:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.514 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.514 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.514 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:40.514 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.772 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.340 00:12:41.340 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.340 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.340 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.599 { 00:12:41.599 "auth": { 00:12:41.599 "dhgroup": "ffdhe2048", 00:12:41.599 "digest": "sha384", 00:12:41.599 "state": "completed" 00:12:41.599 }, 00:12:41.599 "cntlid": 61, 00:12:41.599 "listen_address": { 00:12:41.599 "adrfam": "IPv4", 00:12:41.599 "traddr": "10.0.0.2", 00:12:41.599 "trsvcid": "4420", 00:12:41.599 "trtype": "TCP" 00:12:41.599 }, 00:12:41.599 "peer_address": { 00:12:41.599 "adrfam": "IPv4", 00:12:41.599 "traddr": "10.0.0.1", 00:12:41.599 "trsvcid": "52312", 00:12:41.599 "trtype": "TCP" 00:12:41.599 }, 00:12:41.599 "qid": 0, 00:12:41.599 "state": "enabled", 00:12:41.599 "thread": 
"nvmf_tgt_poll_group_000" 00:12:41.599 } 00:12:41.599 ]' 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:41.599 20:29:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.599 20:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.599 20:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.599 20:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.166 20:29:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:42.732 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:42.990 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:12:42.990 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.990 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:42.990 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:42.990 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:42.990 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.991 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:12:42.991 20:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.991 20:29:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:42.991 20:29:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.991 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:42.991 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:43.251 00:12:43.251 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.251 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.251 20:29:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.509 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.509 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.509 20:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.509 20:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.768 { 00:12:43.768 "auth": { 00:12:43.768 "dhgroup": "ffdhe2048", 00:12:43.768 "digest": "sha384", 00:12:43.768 "state": "completed" 00:12:43.768 }, 00:12:43.768 "cntlid": 63, 00:12:43.768 "listen_address": { 00:12:43.768 "adrfam": "IPv4", 00:12:43.768 "traddr": "10.0.0.2", 00:12:43.768 "trsvcid": "4420", 00:12:43.768 "trtype": "TCP" 00:12:43.768 }, 00:12:43.768 "peer_address": { 00:12:43.768 "adrfam": "IPv4", 00:12:43.768 "traddr": "10.0.0.1", 00:12:43.768 "trsvcid": "52328", 00:12:43.768 "trtype": "TCP" 00:12:43.768 }, 00:12:43.768 "qid": 0, 00:12:43.768 "state": "enabled", 00:12:43.768 "thread": "nvmf_tgt_poll_group_000" 00:12:43.768 } 00:12:43.768 ]' 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.768 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.026 20:29:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid 
ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:44.594 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.162 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.422 00:12:45.422 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.422 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.422 20:29:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.680 { 00:12:45.680 "auth": { 00:12:45.680 "dhgroup": "ffdhe3072", 00:12:45.680 "digest": "sha384", 00:12:45.680 "state": "completed" 00:12:45.680 }, 00:12:45.680 "cntlid": 65, 00:12:45.680 "listen_address": { 00:12:45.680 "adrfam": "IPv4", 00:12:45.680 "traddr": "10.0.0.2", 00:12:45.680 "trsvcid": "4420", 00:12:45.680 "trtype": "TCP" 00:12:45.680 }, 00:12:45.680 "peer_address": { 00:12:45.680 "adrfam": "IPv4", 00:12:45.680 "traddr": "10.0.0.1", 00:12:45.680 "trsvcid": "52358", 00:12:45.680 "trtype": "TCP" 00:12:45.680 }, 00:12:45.680 "qid": 0, 00:12:45.680 "state": "enabled", 00:12:45.680 "thread": "nvmf_tgt_poll_group_000" 00:12:45.680 } 00:12:45.680 ]' 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:45.680 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.938 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.938 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.938 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.196 20:29:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.161 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.726 00:12:47.726 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.726 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.726 20:29:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.984 { 00:12:47.984 "auth": { 00:12:47.984 "dhgroup": "ffdhe3072", 00:12:47.984 "digest": "sha384", 00:12:47.984 "state": "completed" 00:12:47.984 }, 00:12:47.984 "cntlid": 67, 00:12:47.984 "listen_address": { 00:12:47.984 "adrfam": "IPv4", 00:12:47.984 "traddr": "10.0.0.2", 00:12:47.984 "trsvcid": "4420", 00:12:47.984 "trtype": "TCP" 00:12:47.984 }, 00:12:47.984 
"peer_address": { 00:12:47.984 "adrfam": "IPv4", 00:12:47.984 "traddr": "10.0.0.1", 00:12:47.984 "trsvcid": "52388", 00:12:47.984 "trtype": "TCP" 00:12:47.984 }, 00:12:47.984 "qid": 0, 00:12:47.984 "state": "enabled", 00:12:47.984 "thread": "nvmf_tgt_poll_group_000" 00:12:47.984 } 00:12:47.984 ]' 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.984 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.241 20:29:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.196 20:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.454 20:29:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.454 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.454 20:29:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.712 00:12:49.712 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:49.712 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.712 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:49.970 { 00:12:49.970 "auth": { 00:12:49.970 "dhgroup": "ffdhe3072", 00:12:49.970 "digest": "sha384", 00:12:49.970 "state": "completed" 00:12:49.970 }, 00:12:49.970 "cntlid": 69, 00:12:49.970 "listen_address": { 00:12:49.970 "adrfam": "IPv4", 00:12:49.970 "traddr": "10.0.0.2", 00:12:49.970 "trsvcid": "4420", 00:12:49.970 "trtype": "TCP" 00:12:49.970 }, 00:12:49.970 "peer_address": { 00:12:49.970 "adrfam": "IPv4", 00:12:49.970 "traddr": "10.0.0.1", 00:12:49.970 "trsvcid": "44572", 00:12:49.970 "trtype": "TCP" 00:12:49.970 }, 00:12:49.970 "qid": 0, 00:12:49.970 "state": "enabled", 00:12:49.970 "thread": "nvmf_tgt_poll_group_000" 00:12:49.970 } 00:12:49.970 ]' 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:49.970 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:50.227 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.227 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.227 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.227 20:29:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.485 20:29:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.419 20:29:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:51.984 00:12:51.984 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:12:51.984 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.984 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.241 { 00:12:52.241 "auth": { 00:12:52.241 "dhgroup": "ffdhe3072", 00:12:52.241 "digest": "sha384", 00:12:52.241 "state": "completed" 00:12:52.241 }, 00:12:52.241 "cntlid": 71, 00:12:52.241 "listen_address": { 00:12:52.241 "adrfam": "IPv4", 00:12:52.241 "traddr": "10.0.0.2", 00:12:52.241 "trsvcid": "4420", 00:12:52.241 "trtype": "TCP" 00:12:52.241 }, 00:12:52.241 "peer_address": { 00:12:52.241 "adrfam": "IPv4", 00:12:52.241 "traddr": "10.0.0.1", 00:12:52.241 "trsvcid": "44584", 00:12:52.241 "trtype": "TCP" 00:12:52.241 }, 00:12:52.241 "qid": 0, 00:12:52.241 "state": "enabled", 00:12:52.241 "thread": "nvmf_tgt_poll_group_000" 00:12:52.241 } 00:12:52.241 ]' 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.241 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.508 20:29:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
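The passes above all follow the same shape: the host-side bdev layer is pinned to a single digest/dhgroup pair, the host NQN is added back to the subsystem with the DH-HMAC-CHAP key under test (plus a controller key when bidirectional authentication is exercised), a controller is attached over TCP, the resulting qpair is inspected, and everything is torn down again. A condensed sketch of one pass, using only RPCs that appear in the trace — here key0/ckey0 are names of key objects registered earlier in the run (not shown in this excerpt), $hostnqn stands for the uuid-based host NQN used throughout the trace, and the target-side calls go through the rpc_cmd wrapper, whose RPC socket is likewise set up earlier:

  # host side: restrict DH-HMAC-CHAP negotiation to sha384 + ffdhe3072
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
  # target side: authorize the host NQN with key0 (ckey0 authenticates the controller back to the host)
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller over TCP, authenticating with the matching key pair
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # confirm the controller came up, then detach it again
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
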
00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:53.444 20:29:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.702 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:53.960 00:12:53.960 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:53.960 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:53.960 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.528 { 00:12:54.528 "auth": { 00:12:54.528 "dhgroup": "ffdhe4096", 00:12:54.528 "digest": "sha384", 00:12:54.528 "state": "completed" 00:12:54.528 }, 00:12:54.528 "cntlid": 73, 
00:12:54.528 "listen_address": { 00:12:54.528 "adrfam": "IPv4", 00:12:54.528 "traddr": "10.0.0.2", 00:12:54.528 "trsvcid": "4420", 00:12:54.528 "trtype": "TCP" 00:12:54.528 }, 00:12:54.528 "peer_address": { 00:12:54.528 "adrfam": "IPv4", 00:12:54.528 "traddr": "10.0.0.1", 00:12:54.528 "trsvcid": "44596", 00:12:54.528 "trtype": "TCP" 00:12:54.528 }, 00:12:54.528 "qid": 0, 00:12:54.528 "state": "enabled", 00:12:54.528 "thread": "nvmf_tgt_poll_group_000" 00:12:54.528 } 00:12:54.528 ]' 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.528 20:29:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.785 20:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:55.720 20:29:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:55.720 
20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.720 20:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.978 20:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.978 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.978 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.237 00:12:56.237 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.237 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.237 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.495 { 00:12:56.495 "auth": { 00:12:56.495 "dhgroup": "ffdhe4096", 00:12:56.495 "digest": "sha384", 00:12:56.495 "state": "completed" 00:12:56.495 }, 00:12:56.495 "cntlid": 75, 00:12:56.495 "listen_address": { 00:12:56.495 "adrfam": "IPv4", 00:12:56.495 "traddr": "10.0.0.2", 00:12:56.495 "trsvcid": "4420", 00:12:56.495 "trtype": "TCP" 00:12:56.495 }, 00:12:56.495 "peer_address": { 00:12:56.495 "adrfam": "IPv4", 00:12:56.495 "traddr": "10.0.0.1", 00:12:56.495 "trsvcid": "44620", 00:12:56.495 "trtype": "TCP" 00:12:56.495 }, 00:12:56.495 "qid": 0, 00:12:56.495 "state": "enabled", 00:12:56.495 "thread": "nvmf_tgt_poll_group_000" 00:12:56.495 } 00:12:56.495 ]' 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:56.495 20:29:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.752 20:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:56.752 20:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.752 20:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ 
completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.752 20:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.752 20:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.009 20:29:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:12:57.574 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.574 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:57.574 20:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.575 20:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.575 20:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.575 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.575 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:57.575 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.139 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.396 00:12:58.396 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.396 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.396 20:29:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.654 { 00:12:58.654 "auth": { 00:12:58.654 "dhgroup": "ffdhe4096", 00:12:58.654 "digest": "sha384", 00:12:58.654 "state": "completed" 00:12:58.654 }, 00:12:58.654 "cntlid": 77, 00:12:58.654 "listen_address": { 00:12:58.654 "adrfam": "IPv4", 00:12:58.654 "traddr": "10.0.0.2", 00:12:58.654 "trsvcid": "4420", 00:12:58.654 "trtype": "TCP" 00:12:58.654 }, 00:12:58.654 "peer_address": { 00:12:58.654 "adrfam": "IPv4", 00:12:58.654 "traddr": "10.0.0.1", 00:12:58.654 "trsvcid": "38460", 00:12:58.654 "trtype": "TCP" 00:12:58.654 }, 00:12:58.654 "qid": 0, 00:12:58.654 "state": "enabled", 00:12:58.654 "thread": "nvmf_tgt_poll_group_000" 00:12:58.654 } 00:12:58.654 ]' 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:58.654 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.912 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:58.912 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.912 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.912 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.912 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.169 20:29:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:59.734 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.296 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:00.552 00:13:00.552 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.552 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.552 20:29:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:13:00.810 { 00:13:00.810 "auth": { 00:13:00.810 "dhgroup": "ffdhe4096", 00:13:00.810 "digest": "sha384", 00:13:00.810 "state": "completed" 00:13:00.810 }, 00:13:00.810 "cntlid": 79, 00:13:00.810 "listen_address": { 00:13:00.810 "adrfam": "IPv4", 00:13:00.810 "traddr": "10.0.0.2", 00:13:00.810 "trsvcid": "4420", 00:13:00.810 "trtype": "TCP" 00:13:00.810 }, 00:13:00.810 "peer_address": { 00:13:00.810 "adrfam": "IPv4", 00:13:00.810 "traddr": "10.0.0.1", 00:13:00.810 "trsvcid": "38478", 00:13:00.810 "trtype": "TCP" 00:13:00.810 }, 00:13:00.810 "qid": 0, 00:13:00.810 "state": "enabled", 00:13:00.810 "thread": "nvmf_tgt_poll_group_000" 00:13:00.810 } 00:13:00.810 ]' 00:13:00.810 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.066 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.322 20:29:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
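After each attach, the negotiated parameters on the new admin qpair are read back from the target and compared against what was requested; that is what the auth.sh@46-48 checks in the trace do with jq. In isolation the verification looks roughly like the sketch below (the target RPC is reached through the rpc_cmd wrapper, as elsewhere in the trace; the expected values are the ones for the current digest/dhgroup pass):

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]     # digest actually negotiated
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]  # DH group actually negotiated
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished successfully
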
00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.253 20:29:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.816 00:13:02.816 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:02.816 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.816 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.101 { 00:13:03.101 "auth": { 00:13:03.101 "dhgroup": "ffdhe6144", 00:13:03.101 "digest": "sha384", 00:13:03.101 "state": "completed" 00:13:03.101 }, 00:13:03.101 "cntlid": 81, 00:13:03.101 "listen_address": { 00:13:03.101 "adrfam": "IPv4", 00:13:03.101 "traddr": "10.0.0.2", 00:13:03.101 "trsvcid": "4420", 00:13:03.101 "trtype": "TCP" 00:13:03.101 }, 00:13:03.101 "peer_address": { 00:13:03.101 "adrfam": "IPv4", 00:13:03.101 "traddr": "10.0.0.1", 00:13:03.101 "trsvcid": "38514", 00:13:03.101 "trtype": "TCP" 00:13:03.101 }, 00:13:03.101 "qid": 0, 00:13:03.101 "state": "enabled", 00:13:03.101 "thread": "nvmf_tgt_poll_group_000" 00:13:03.101 } 00:13:03.101 ]' 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:13:03.101 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.359 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.359 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.359 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.617 20:29:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:04.191 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.770 20:29:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.027 00:13:05.027 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.027 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.027 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.284 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.284 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.284 20:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.284 20:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.284 20:29:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.284 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.284 { 00:13:05.284 "auth": { 00:13:05.284 "dhgroup": "ffdhe6144", 00:13:05.284 "digest": "sha384", 00:13:05.284 "state": "completed" 00:13:05.284 }, 00:13:05.284 "cntlid": 83, 00:13:05.284 "listen_address": { 00:13:05.284 "adrfam": "IPv4", 00:13:05.284 "traddr": "10.0.0.2", 00:13:05.284 "trsvcid": "4420", 00:13:05.284 "trtype": "TCP" 00:13:05.284 }, 00:13:05.284 "peer_address": { 00:13:05.284 "adrfam": "IPv4", 00:13:05.284 "traddr": "10.0.0.1", 00:13:05.284 "trsvcid": "38544", 00:13:05.285 "trtype": "TCP" 00:13:05.285 }, 00:13:05.285 "qid": 0, 00:13:05.285 "state": "enabled", 00:13:05.285 "thread": "nvmf_tgt_poll_group_000" 00:13:05.285 } 00:13:05.285 ]' 00:13:05.285 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.542 20:29:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.800 20:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:13:06.364 20:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:06.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.622 20:29:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.880 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.137 00:13:07.394 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:07.394 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:07.394 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:07.651 { 00:13:07.651 "auth": { 00:13:07.651 "dhgroup": "ffdhe6144", 00:13:07.651 "digest": "sha384", 00:13:07.651 "state": "completed" 00:13:07.651 }, 00:13:07.651 "cntlid": 85, 00:13:07.651 "listen_address": { 00:13:07.651 "adrfam": "IPv4", 00:13:07.651 "traddr": "10.0.0.2", 00:13:07.651 "trsvcid": "4420", 00:13:07.651 "trtype": "TCP" 00:13:07.651 }, 00:13:07.651 "peer_address": { 00:13:07.651 "adrfam": "IPv4", 00:13:07.651 "traddr": "10.0.0.1", 00:13:07.651 "trsvcid": "38556", 00:13:07.651 "trtype": "TCP" 00:13:07.651 }, 00:13:07.651 "qid": 0, 00:13:07.651 "state": "enabled", 00:13:07.651 "thread": "nvmf_tgt_poll_group_000" 00:13:07.651 } 00:13:07.651 ]' 00:13:07.651 20:29:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.651 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.215 20:29:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:08.780 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:09.037 20:29:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.037 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:09.602 00:13:09.602 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:09.602 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:09.602 20:29:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:09.860 { 00:13:09.860 "auth": { 00:13:09.860 "dhgroup": "ffdhe6144", 00:13:09.860 "digest": "sha384", 00:13:09.860 "state": "completed" 00:13:09.860 }, 00:13:09.860 "cntlid": 87, 00:13:09.860 "listen_address": { 00:13:09.860 "adrfam": "IPv4", 00:13:09.860 "traddr": "10.0.0.2", 00:13:09.860 "trsvcid": "4420", 00:13:09.860 "trtype": "TCP" 00:13:09.860 }, 00:13:09.860 "peer_address": { 00:13:09.860 "adrfam": "IPv4", 00:13:09.860 "traddr": "10.0.0.1", 00:13:09.860 "trsvcid": "36166", 00:13:09.860 "trtype": "TCP" 00:13:09.860 }, 00:13:09.860 "qid": 0, 00:13:09.860 "state": "enabled", 00:13:09.860 "thread": "nvmf_tgt_poll_group_000" 00:13:09.860 } 00:13:09.860 ]' 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:09.860 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:10.119 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:10.119 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:10.119 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.119 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.119 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.377 20:29:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:10.943 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.200 20:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.459 20:29:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.459 20:29:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:11.459 20:29:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.023 00:13:12.023 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:12.023 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:12.023 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:12.280 { 00:13:12.280 "auth": { 00:13:12.280 "dhgroup": "ffdhe8192", 00:13:12.280 "digest": "sha384", 00:13:12.280 "state": "completed" 00:13:12.280 }, 00:13:12.280 "cntlid": 89, 00:13:12.280 "listen_address": { 00:13:12.280 "adrfam": "IPv4", 00:13:12.280 "traddr": "10.0.0.2", 00:13:12.280 "trsvcid": "4420", 00:13:12.280 "trtype": "TCP" 00:13:12.280 }, 00:13:12.280 "peer_address": { 00:13:12.280 "adrfam": "IPv4", 00:13:12.280 "traddr": "10.0.0.1", 00:13:12.280 "trsvcid": "36188", 00:13:12.280 "trtype": "TCP" 00:13:12.280 }, 00:13:12.280 "qid": 0, 00:13:12.280 "state": "enabled", 00:13:12.280 "thread": "nvmf_tgt_poll_group_000" 00:13:12.280 } 00:13:12.280 ]' 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:12.280 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:12.538 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.538 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.538 20:29:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.796 20:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret 
DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:13.361 20:29:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.928 20:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.929 20:29:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.929 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:13.929 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:14.495 00:13:14.495 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:14.495 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.495 20:29:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
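The trace up to this point is one pass of the connect_authenticate helper from target/auth.sh, here exercising sha384 with ffdhe8192: the host bdev_nvme module is restricted to a single digest/dhgroup pair, the host NQN is authorized on the subsystem with a DH-HMAC-CHAP key pair, a controller is attached through the host RPC socket, and the resulting qpair is checked for the expected digest, dhgroup, and "completed" state. A condensed sketch of that sequence (not the script itself), reusing the RPC invocations visible in the trace; key0/ckey0 are key names assumed to have been registered earlier in the script, and the target-side calls are assumed to go to the default RPC socket, as rpc_cmd does here:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # host side: allow only one digest/dhgroup combination for the DH-HMAC-CHAP handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # target side: authorize the host NQN with key0, plus ckey0 for bidirectional authentication
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: attach a controller, which triggers the authentication exchange
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the negotiated parameters and that authentication completed on the qpair
  $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth | .digest, .dhgroup, .state'

The same pattern then repeats for each key index and for every digest/dhgroup combination in the matrix, as the rest of the trace shows.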
00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:14.754 { 00:13:14.754 "auth": { 00:13:14.754 "dhgroup": "ffdhe8192", 00:13:14.754 "digest": "sha384", 00:13:14.754 "state": "completed" 00:13:14.754 }, 00:13:14.754 "cntlid": 91, 00:13:14.754 "listen_address": { 00:13:14.754 "adrfam": "IPv4", 00:13:14.754 "traddr": "10.0.0.2", 00:13:14.754 "trsvcid": "4420", 00:13:14.754 "trtype": "TCP" 00:13:14.754 }, 00:13:14.754 "peer_address": { 00:13:14.754 "adrfam": "IPv4", 00:13:14.754 "traddr": "10.0.0.1", 00:13:14.754 "trsvcid": "36212", 00:13:14.754 "trtype": "TCP" 00:13:14.754 }, 00:13:14.754 "qid": 0, 00:13:14.754 "state": "enabled", 00:13:14.754 "thread": "nvmf_tgt_poll_group_000" 00:13:14.754 } 00:13:14.754 ]' 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.754 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:15.012 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.012 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.012 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.270 20:29:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:13:15.838 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.096 20:29:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:16.661 00:13:16.920 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:16.920 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:16.920 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:17.183 { 00:13:17.183 "auth": { 00:13:17.183 "dhgroup": "ffdhe8192", 00:13:17.183 "digest": "sha384", 00:13:17.183 "state": "completed" 00:13:17.183 }, 00:13:17.183 "cntlid": 93, 00:13:17.183 "listen_address": { 00:13:17.183 "adrfam": "IPv4", 00:13:17.183 "traddr": "10.0.0.2", 00:13:17.183 "trsvcid": "4420", 00:13:17.183 "trtype": "TCP" 00:13:17.183 }, 00:13:17.183 "peer_address": { 00:13:17.183 "adrfam": "IPv4", 00:13:17.183 "traddr": "10.0.0.1", 00:13:17.183 "trsvcid": "36238", 00:13:17.183 
"trtype": "TCP" 00:13:17.183 }, 00:13:17.183 "qid": 0, 00:13:17.183 "state": "enabled", 00:13:17.183 "thread": "nvmf_tgt_poll_group_000" 00:13:17.183 } 00:13:17.183 ]' 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.183 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.440 20:29:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.373 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:18.631 20:29:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:18.631 20:29:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:19.196 00:13:19.196 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:19.196 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.196 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:19.454 { 00:13:19.454 "auth": { 00:13:19.454 "dhgroup": "ffdhe8192", 00:13:19.454 "digest": "sha384", 00:13:19.454 "state": "completed" 00:13:19.454 }, 00:13:19.454 "cntlid": 95, 00:13:19.454 "listen_address": { 00:13:19.454 "adrfam": "IPv4", 00:13:19.454 "traddr": "10.0.0.2", 00:13:19.454 "trsvcid": "4420", 00:13:19.454 "trtype": "TCP" 00:13:19.454 }, 00:13:19.454 "peer_address": { 00:13:19.454 "adrfam": "IPv4", 00:13:19.454 "traddr": "10.0.0.1", 00:13:19.454 "trsvcid": "39942", 00:13:19.454 "trtype": "TCP" 00:13:19.454 }, 00:13:19.454 "qid": 0, 00:13:19.454 "state": "enabled", 00:13:19.454 "thread": "nvmf_tgt_poll_group_000" 00:13:19.454 } 00:13:19.454 ]' 00:13:19.454 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:19.711 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.711 20:29:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:19.711 20:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:19.711 20:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:19.711 20:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.711 20:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.711 20:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.969 20:29:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:20.905 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.472 00:13:21.472 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:13:21.472 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:21.472 20:29:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:21.730 { 00:13:21.730 "auth": { 00:13:21.730 "dhgroup": "null", 00:13:21.730 "digest": "sha512", 00:13:21.730 "state": "completed" 00:13:21.730 }, 00:13:21.730 "cntlid": 97, 00:13:21.730 "listen_address": { 00:13:21.730 "adrfam": "IPv4", 00:13:21.730 "traddr": "10.0.0.2", 00:13:21.730 "trsvcid": "4420", 00:13:21.730 "trtype": "TCP" 00:13:21.730 }, 00:13:21.730 "peer_address": { 00:13:21.730 "adrfam": "IPv4", 00:13:21.730 "traddr": "10.0.0.1", 00:13:21.730 "trsvcid": "39966", 00:13:21.730 "trtype": "TCP" 00:13:21.730 }, 00:13:21.730 "qid": 0, 00:13:21.730 "state": "enabled", 00:13:21.730 "thread": "nvmf_tgt_poll_group_000" 00:13:21.730 } 00:13:21.730 ]' 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.730 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.987 20:29:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:22.919 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.920 
20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:22.920 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:23.485 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:23.485 { 00:13:23.485 "auth": { 00:13:23.485 "dhgroup": "null", 00:13:23.485 "digest": "sha512", 00:13:23.485 "state": "completed" 00:13:23.485 }, 00:13:23.485 "cntlid": 99, 00:13:23.485 "listen_address": { 
00:13:23.485 "adrfam": "IPv4", 00:13:23.485 "traddr": "10.0.0.2", 00:13:23.485 "trsvcid": "4420", 00:13:23.485 "trtype": "TCP" 00:13:23.485 }, 00:13:23.485 "peer_address": { 00:13:23.485 "adrfam": "IPv4", 00:13:23.485 "traddr": "10.0.0.1", 00:13:23.485 "trsvcid": "39984", 00:13:23.485 "trtype": "TCP" 00:13:23.485 }, 00:13:23.485 "qid": 0, 00:13:23.485 "state": "enabled", 00:13:23.485 "thread": "nvmf_tgt_poll_group_000" 00:13:23.485 } 00:13:23.485 ]' 00:13:23.485 20:29:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.744 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.036 20:29:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:24.612 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
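At this point the script has moved on to sha512 with the null dhgroup (no FFDHE exchange) and is iterating over the key indexes again. Each iteration finishes with the nvme-cli half of the check that also appears throughout the trace: connect to the subsystem with the host secret (and controller secret when the key has one), disconnect, and de-authorize the host before the next combination. A sketch of that teardown half, with the DHHC-1 secrets shown in the trace replaced by placeholders:

  hostid=ec49175a-6012-419b-81e2-f6fecd100da5
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  # connect through nvme-cli; "DHHC-1:..." stand in for the actual secrets logged above
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
  # tear down: drop the kernel controller, then remove the host from the subsystem
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"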
00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.868 20:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.126 20:29:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.126 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.126 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:25.383 00:13:25.383 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:25.383 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:25.383 20:29:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:25.640 { 00:13:25.640 "auth": { 00:13:25.640 "dhgroup": "null", 00:13:25.640 "digest": "sha512", 00:13:25.640 "state": "completed" 00:13:25.640 }, 00:13:25.640 "cntlid": 101, 00:13:25.640 "listen_address": { 00:13:25.640 "adrfam": "IPv4", 00:13:25.640 "traddr": "10.0.0.2", 00:13:25.640 "trsvcid": "4420", 00:13:25.640 "trtype": "TCP" 00:13:25.640 }, 00:13:25.640 "peer_address": { 00:13:25.640 "adrfam": "IPv4", 00:13:25.640 "traddr": "10.0.0.1", 00:13:25.640 "trsvcid": "40012", 00:13:25.640 "trtype": "TCP" 00:13:25.640 }, 00:13:25.640 "qid": 0, 00:13:25.640 "state": "enabled", 00:13:25.640 "thread": "nvmf_tgt_poll_group_000" 00:13:25.640 } 00:13:25.640 ]' 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.640 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:25.898 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:25.898 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:25.898 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.898 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:25.898 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.156 20:29:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.090 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:27.349 00:13:27.349 20:29:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:27.349 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:27.349 20:29:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.607 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.607 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.607 20:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:27.607 20:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.607 20:29:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:27.607 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:27.607 { 00:13:27.607 "auth": { 00:13:27.607 "dhgroup": "null", 00:13:27.607 "digest": "sha512", 00:13:27.607 "state": "completed" 00:13:27.607 }, 00:13:27.607 "cntlid": 103, 00:13:27.607 "listen_address": { 00:13:27.607 "adrfam": "IPv4", 00:13:27.607 "traddr": "10.0.0.2", 00:13:27.608 "trsvcid": "4420", 00:13:27.608 "trtype": "TCP" 00:13:27.608 }, 00:13:27.608 "peer_address": { 00:13:27.608 "adrfam": "IPv4", 00:13:27.608 "traddr": "10.0.0.1", 00:13:27.608 "trsvcid": "40046", 00:13:27.608 "trtype": "TCP" 00:13:27.608 }, 00:13:27.608 "qid": 0, 00:13:27.608 "state": "enabled", 00:13:27.608 "thread": "nvmf_tgt_poll_group_000" 00:13:27.608 } 00:13:27.608 ]' 00:13:27.608 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.866 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.125 20:29:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.059 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.318 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:29.576 00:13:29.576 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:29.576 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:29.576 20:29:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:29.833 { 00:13:29.833 "auth": { 00:13:29.833 "dhgroup": "ffdhe2048", 00:13:29.833 "digest": "sha512", 00:13:29.833 "state": 
"completed" 00:13:29.833 }, 00:13:29.833 "cntlid": 105, 00:13:29.833 "listen_address": { 00:13:29.833 "adrfam": "IPv4", 00:13:29.833 "traddr": "10.0.0.2", 00:13:29.833 "trsvcid": "4420", 00:13:29.833 "trtype": "TCP" 00:13:29.833 }, 00:13:29.833 "peer_address": { 00:13:29.833 "adrfam": "IPv4", 00:13:29.833 "traddr": "10.0.0.1", 00:13:29.833 "trsvcid": "55562", 00:13:29.833 "trtype": "TCP" 00:13:29.833 }, 00:13:29.833 "qid": 0, 00:13:29.833 "state": "enabled", 00:13:29.833 "thread": "nvmf_tgt_poll_group_000" 00:13:29.833 } 00:13:29.833 ]' 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.833 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:30.091 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.091 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:30.091 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.091 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.091 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.441 20:29:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.006 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # key=key1 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.571 20:29:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:31.828 00:13:31.828 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:31.828 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.828 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:32.085 { 00:13:32.085 "auth": { 00:13:32.085 "dhgroup": "ffdhe2048", 00:13:32.085 "digest": "sha512", 00:13:32.085 "state": "completed" 00:13:32.085 }, 00:13:32.085 "cntlid": 107, 00:13:32.085 "listen_address": { 00:13:32.085 "adrfam": "IPv4", 00:13:32.085 "traddr": "10.0.0.2", 00:13:32.085 "trsvcid": "4420", 00:13:32.085 "trtype": "TCP" 00:13:32.085 }, 00:13:32.085 "peer_address": { 00:13:32.085 "adrfam": "IPv4", 00:13:32.085 "traddr": "10.0.0.1", 00:13:32.085 "trsvcid": "55598", 00:13:32.085 "trtype": "TCP" 00:13:32.085 }, 00:13:32.085 "qid": 0, 00:13:32.085 "state": "enabled", 00:13:32.085 "thread": "nvmf_tgt_poll_group_000" 00:13:32.085 } 00:13:32.085 ]' 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:32.085 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:32.342 20:29:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.342 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.342 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.599 20:29:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.532 20:29:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.790 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.048 00:13:34.306 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:34.306 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:34.306 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:34.563 { 00:13:34.563 "auth": { 00:13:34.563 "dhgroup": "ffdhe2048", 00:13:34.563 "digest": "sha512", 00:13:34.563 "state": "completed" 00:13:34.563 }, 00:13:34.563 "cntlid": 109, 00:13:34.563 "listen_address": { 00:13:34.563 "adrfam": "IPv4", 00:13:34.563 "traddr": "10.0.0.2", 00:13:34.563 "trsvcid": "4420", 00:13:34.563 "trtype": "TCP" 00:13:34.563 }, 00:13:34.563 "peer_address": { 00:13:34.563 "adrfam": "IPv4", 00:13:34.563 "traddr": "10.0.0.1", 00:13:34.563 "trsvcid": "55614", 00:13:34.563 "trtype": "TCP" 00:13:34.563 }, 00:13:34.563 "qid": 0, 00:13:34.563 "state": "enabled", 00:13:34.563 "thread": "nvmf_tgt_poll_group_000" 00:13:34.563 } 00:13:34.563 ]' 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:34.563 20:29:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:34.563 20:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.563 20:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.563 20:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.821 20:29:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:35.754 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.012 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:36.270 00:13:36.270 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:36.270 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:36.270 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:13:36.528 { 00:13:36.528 "auth": { 00:13:36.528 "dhgroup": "ffdhe2048", 00:13:36.528 "digest": "sha512", 00:13:36.528 "state": "completed" 00:13:36.528 }, 00:13:36.528 "cntlid": 111, 00:13:36.528 "listen_address": { 00:13:36.528 "adrfam": "IPv4", 00:13:36.528 "traddr": "10.0.0.2", 00:13:36.528 "trsvcid": "4420", 00:13:36.528 "trtype": "TCP" 00:13:36.528 }, 00:13:36.528 "peer_address": { 00:13:36.528 "adrfam": "IPv4", 00:13:36.528 "traddr": "10.0.0.1", 00:13:36.528 "trsvcid": "55638", 00:13:36.528 "trtype": "TCP" 00:13:36.528 }, 00:13:36.528 "qid": 0, 00:13:36.528 "state": "enabled", 00:13:36.528 "thread": "nvmf_tgt_poll_group_000" 00:13:36.528 } 00:13:36.528 ]' 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:36.528 20:29:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:36.787 20:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:36.787 20:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:36.787 20:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.787 20:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.787 20:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.045 20:29:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:37.608 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.177 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.435 00:13:38.435 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:38.435 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:38.435 20:29:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.693 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.694 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.694 20:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.694 20:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.694 20:30:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.694 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:38.694 { 00:13:38.694 "auth": { 00:13:38.694 "dhgroup": "ffdhe3072", 00:13:38.694 "digest": "sha512", 00:13:38.694 "state": "completed" 00:13:38.694 }, 00:13:38.694 "cntlid": 113, 00:13:38.694 "listen_address": { 00:13:38.694 "adrfam": "IPv4", 00:13:38.694 "traddr": "10.0.0.2", 00:13:38.694 "trsvcid": "4420", 00:13:38.694 "trtype": "TCP" 00:13:38.694 }, 00:13:38.694 "peer_address": { 00:13:38.694 "adrfam": "IPv4", 00:13:38.694 "traddr": "10.0.0.1", 00:13:38.694 "trsvcid": "41558", 00:13:38.694 "trtype": "TCP" 00:13:38.694 }, 00:13:38.694 "qid": 0, 00:13:38.694 "state": "enabled", 00:13:38.694 "thread": "nvmf_tgt_poll_group_000" 00:13:38.694 } 00:13:38.694 ]' 00:13:38.694 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.952 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.210 20:30:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:40.145 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.403 20:30:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.661 00:13:40.661 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:40.661 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:40.661 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:41.228 { 00:13:41.228 "auth": { 00:13:41.228 "dhgroup": "ffdhe3072", 00:13:41.228 "digest": "sha512", 00:13:41.228 "state": "completed" 00:13:41.228 }, 00:13:41.228 "cntlid": 115, 00:13:41.228 "listen_address": { 00:13:41.228 "adrfam": "IPv4", 00:13:41.228 "traddr": "10.0.0.2", 00:13:41.228 "trsvcid": "4420", 00:13:41.228 "trtype": "TCP" 00:13:41.228 }, 00:13:41.228 "peer_address": { 00:13:41.228 "adrfam": "IPv4", 00:13:41.228 "traddr": "10.0.0.1", 00:13:41.228 "trsvcid": "41590", 00:13:41.228 "trtype": "TCP" 00:13:41.228 }, 00:13:41.228 "qid": 0, 00:13:41.228 "state": "enabled", 00:13:41.228 "thread": "nvmf_tgt_poll_group_000" 00:13:41.228 } 00:13:41.228 ]' 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.228 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.486 20:30:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:13:42.420 20:30:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.420 20:30:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:42.679 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.256 00:13:43.256 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.256 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.256 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:43.533 { 00:13:43.533 "auth": { 00:13:43.533 "dhgroup": "ffdhe3072", 00:13:43.533 "digest": "sha512", 00:13:43.533 "state": "completed" 00:13:43.533 }, 00:13:43.533 "cntlid": 117, 00:13:43.533 "listen_address": { 00:13:43.533 "adrfam": "IPv4", 00:13:43.533 "traddr": "10.0.0.2", 00:13:43.533 "trsvcid": "4420", 00:13:43.533 "trtype": "TCP" 00:13:43.533 }, 00:13:43.533 "peer_address": { 00:13:43.533 "adrfam": "IPv4", 00:13:43.533 "traddr": "10.0.0.1", 00:13:43.533 "trsvcid": "41622", 00:13:43.533 "trtype": "TCP" 00:13:43.533 }, 00:13:43.533 "qid": 0, 00:13:43.533 "state": "enabled", 00:13:43.533 "thread": "nvmf_tgt_poll_group_000" 00:13:43.533 } 00:13:43.533 ]' 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.533 20:30:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.100 20:30:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:44.666 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:44.927 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:45.494 00:13:45.494 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:45.494 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.494 20:30:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:45.753 { 00:13:45.753 "auth": { 00:13:45.753 "dhgroup": "ffdhe3072", 00:13:45.753 "digest": "sha512", 00:13:45.753 "state": "completed" 00:13:45.753 }, 00:13:45.753 "cntlid": 119, 00:13:45.753 "listen_address": { 00:13:45.753 "adrfam": "IPv4", 00:13:45.753 "traddr": "10.0.0.2", 00:13:45.753 "trsvcid": "4420", 00:13:45.753 "trtype": "TCP" 00:13:45.753 }, 00:13:45.753 "peer_address": { 00:13:45.753 "adrfam": "IPv4", 00:13:45.753 "traddr": "10.0.0.1", 00:13:45.753 "trsvcid": "41650", 00:13:45.753 "trtype": "TCP" 00:13:45.753 }, 00:13:45.753 "qid": 0, 00:13:45.753 "state": "enabled", 00:13:45.753 "thread": "nvmf_tgt_poll_group_000" 00:13:45.753 } 00:13:45.753 ]' 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:45.753 
20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:45.753 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.012 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.012 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.012 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.270 20:30:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:47.205 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.206 20:30:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.772 00:13:47.772 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:47.772 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.772 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:48.030 { 00:13:48.030 "auth": { 00:13:48.030 "dhgroup": "ffdhe4096", 00:13:48.030 "digest": "sha512", 00:13:48.030 "state": "completed" 00:13:48.030 }, 00:13:48.030 "cntlid": 121, 00:13:48.030 "listen_address": { 00:13:48.030 "adrfam": "IPv4", 00:13:48.030 "traddr": "10.0.0.2", 00:13:48.030 "trsvcid": "4420", 00:13:48.030 "trtype": "TCP" 00:13:48.030 }, 00:13:48.030 "peer_address": { 00:13:48.030 "adrfam": "IPv4", 00:13:48.030 "traddr": "10.0.0.1", 00:13:48.030 "trsvcid": "41670", 00:13:48.030 "trtype": "TCP" 00:13:48.030 }, 00:13:48.030 "qid": 0, 00:13:48.030 "state": "enabled", 00:13:48.030 "thread": "nvmf_tgt_poll_group_000" 00:13:48.030 } 00:13:48.030 ]' 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.030 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:48.031 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.031 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.031 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.597 20:30:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret 
DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.164 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.422 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.423 20:30:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.015 00:13:50.015 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:50.015 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:50.015 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
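The records up to this point repeat one pass of the connect_authenticate helper in target/auth.sh per digest/dhgroup/key combination. Stripped of timestamps and xtrace prefixes, the provisioning half of the pass just logged (sha512 / ffdhe4096 / key1) reduces to roughly the sketch below. Socket paths, addresses and NQNs are the ones printed in the log; key1/ckey1 are key names registered earlier in the run (outside this excerpt); and the bare rpc.py call stands in for rpc_cmd, which is assumed here to reach the nvmf target on its default RPC socket.

# restrict the host-side bdev_nvme module to the digest/dhgroup under test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# authorize the host NQN on the subsystem with this pass's key (and controller key)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# attach a controller from the SPDK host stack, which triggers the DH-HMAC-CHAP handshake
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1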
00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:50.293 { 00:13:50.293 "auth": { 00:13:50.293 "dhgroup": "ffdhe4096", 00:13:50.293 "digest": "sha512", 00:13:50.293 "state": "completed" 00:13:50.293 }, 00:13:50.293 "cntlid": 123, 00:13:50.293 "listen_address": { 00:13:50.293 "adrfam": "IPv4", 00:13:50.293 "traddr": "10.0.0.2", 00:13:50.293 "trsvcid": "4420", 00:13:50.293 "trtype": "TCP" 00:13:50.293 }, 00:13:50.293 "peer_address": { 00:13:50.293 "adrfam": "IPv4", 00:13:50.293 "traddr": "10.0.0.1", 00:13:50.293 "trsvcid": "49028", 00:13:50.293 "trtype": "TCP" 00:13:50.293 }, 00:13:50.293 "qid": 0, 00:13:50.293 "state": "enabled", 00:13:50.293 "thread": "nvmf_tgt_poll_group_000" 00:13:50.293 } 00:13:50.293 ]' 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:50.293 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:50.551 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.551 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.551 20:30:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.809 20:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:13:51.375 20:30:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:51.633 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:13:51.633 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.634 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.200 00:13:52.200 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:52.200 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.200 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:52.459 { 00:13:52.459 "auth": { 00:13:52.459 "dhgroup": "ffdhe4096", 00:13:52.459 "digest": "sha512", 00:13:52.459 "state": "completed" 00:13:52.459 }, 00:13:52.459 "cntlid": 125, 00:13:52.459 "listen_address": { 00:13:52.459 "adrfam": "IPv4", 00:13:52.459 "traddr": "10.0.0.2", 00:13:52.459 "trsvcid": "4420", 00:13:52.459 "trtype": "TCP" 00:13:52.459 }, 00:13:52.459 "peer_address": { 00:13:52.459 "adrfam": "IPv4", 00:13:52.459 "traddr": "10.0.0.1", 00:13:52.459 "trsvcid": "49068", 00:13:52.459 
"trtype": "TCP" 00:13:52.459 }, 00:13:52.459 "qid": 0, 00:13:52.459 "state": "enabled", 00:13:52.459 "thread": "nvmf_tgt_poll_group_000" 00:13:52.459 } 00:13:52.459 ]' 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.459 20:30:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.717 20:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:53.652 20:30:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:13:53.910 20:30:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:53.910 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:54.168 00:13:54.168 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.168 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.168 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.426 { 00:13:54.426 "auth": { 00:13:54.426 "dhgroup": "ffdhe4096", 00:13:54.426 "digest": "sha512", 00:13:54.426 "state": "completed" 00:13:54.426 }, 00:13:54.426 "cntlid": 127, 00:13:54.426 "listen_address": { 00:13:54.426 "adrfam": "IPv4", 00:13:54.426 "traddr": "10.0.0.2", 00:13:54.426 "trsvcid": "4420", 00:13:54.426 "trtype": "TCP" 00:13:54.426 }, 00:13:54.426 "peer_address": { 00:13:54.426 "adrfam": "IPv4", 00:13:54.426 "traddr": "10.0.0.1", 00:13:54.426 "trsvcid": "49102", 00:13:54.426 "trtype": "TCP" 00:13:54.426 }, 00:13:54.426 "qid": 0, 00:13:54.426 "state": "enabled", 00:13:54.426 "thread": "nvmf_tgt_poll_group_000" 00:13:54.426 } 00:13:54.426 ]' 00:13:54.426 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.683 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.683 20:30:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.683 20:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:54.683 20:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.683 20:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.683 20:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.683 20:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.941 20:30:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:55.877 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.136 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.456 00:13:56.456 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.456 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.456 20:30:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.713 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.713 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.714 20:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.714 20:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.714 20:30:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.714 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.714 { 00:13:56.714 "auth": { 00:13:56.714 "dhgroup": "ffdhe6144", 00:13:56.714 "digest": "sha512", 00:13:56.714 "state": "completed" 00:13:56.714 }, 00:13:56.714 "cntlid": 129, 00:13:56.714 "listen_address": { 00:13:56.714 "adrfam": "IPv4", 00:13:56.714 "traddr": "10.0.0.2", 00:13:56.714 "trsvcid": "4420", 00:13:56.714 "trtype": "TCP" 00:13:56.714 }, 00:13:56.714 "peer_address": { 00:13:56.714 "adrfam": "IPv4", 00:13:56.714 "traddr": "10.0.0.1", 00:13:56.714 "trsvcid": "49130", 00:13:56.714 "trtype": "TCP" 00:13:56.714 }, 00:13:56.714 "qid": 0, 00:13:56.714 "state": "enabled", 00:13:56.714 "thread": "nvmf_tgt_poll_group_000" 00:13:56.714 } 00:13:56.714 ]' 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.972 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.230 20:30:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
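Each attach is then verified before teardown: the controller name is read back over the host RPC socket, and the target is asked for the subsystem's qpairs, whose auth block (the JSON above: sha512, ffdhe6144, state "completed") is checked with jq. A minimal shell equivalent of those assertions, with the same caveat as above that the target-side call is assumed to use the default RPC socket:

# the host stack should expose exactly the controller that was attached
name=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

# the target's qpair listing should show the negotiated digest, dhgroup and a completed auth state
qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]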
00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.165 20:30:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.732 00:13:58.732 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.732 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.732 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:58.990 { 00:13:58.990 "auth": { 00:13:58.990 "dhgroup": "ffdhe6144", 00:13:58.990 "digest": "sha512", 00:13:58.990 "state": "completed" 00:13:58.990 }, 00:13:58.990 "cntlid": 131, 00:13:58.990 "listen_address": { 00:13:58.990 "adrfam": "IPv4", 00:13:58.990 "traddr": "10.0.0.2", 
00:13:58.990 "trsvcid": "4420", 00:13:58.990 "trtype": "TCP" 00:13:58.990 }, 00:13:58.990 "peer_address": { 00:13:58.990 "adrfam": "IPv4", 00:13:58.990 "traddr": "10.0.0.1", 00:13:58.990 "trsvcid": "35616", 00:13:58.990 "trtype": "TCP" 00:13:58.990 }, 00:13:58.990 "qid": 0, 00:13:58.990 "state": "enabled", 00:13:58.990 "thread": "nvmf_tgt_poll_group_000" 00:13:58.990 } 00:13:58.990 ]' 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.990 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.249 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:59.249 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.249 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.249 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.249 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.507 20:30:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.442 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.700 20:30:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.700 20:30:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.700 20:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.700 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.700 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.267 00:14:01.267 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.267 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.267 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.525 { 00:14:01.525 "auth": { 00:14:01.525 "dhgroup": "ffdhe6144", 00:14:01.525 "digest": "sha512", 00:14:01.525 "state": "completed" 00:14:01.525 }, 00:14:01.525 "cntlid": 133, 00:14:01.525 "listen_address": { 00:14:01.525 "adrfam": "IPv4", 00:14:01.525 "traddr": "10.0.0.2", 00:14:01.525 "trsvcid": "4420", 00:14:01.525 "trtype": "TCP" 00:14:01.525 }, 00:14:01.525 "peer_address": { 00:14:01.525 "adrfam": "IPv4", 00:14:01.525 "traddr": "10.0.0.1", 00:14:01.525 "trsvcid": "35652", 00:14:01.525 "trtype": "TCP" 00:14:01.525 }, 00:14:01.525 "qid": 0, 00:14:01.525 "state": "enabled", 00:14:01.525 "thread": "nvmf_tgt_poll_group_000" 00:14:01.525 } 00:14:01.525 ]' 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.525 20:30:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.783 20:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:01.783 20:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.783 20:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.783 20:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:01.783 20:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.042 20:30:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:02.977 20:30:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:03.543 
00:14:03.543 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.543 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.543 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.110 { 00:14:04.110 "auth": { 00:14:04.110 "dhgroup": "ffdhe6144", 00:14:04.110 "digest": "sha512", 00:14:04.110 "state": "completed" 00:14:04.110 }, 00:14:04.110 "cntlid": 135, 00:14:04.110 "listen_address": { 00:14:04.110 "adrfam": "IPv4", 00:14:04.110 "traddr": "10.0.0.2", 00:14:04.110 "trsvcid": "4420", 00:14:04.110 "trtype": "TCP" 00:14:04.110 }, 00:14:04.110 "peer_address": { 00:14:04.110 "adrfam": "IPv4", 00:14:04.110 "traddr": "10.0.0.1", 00:14:04.110 "trsvcid": "35678", 00:14:04.110 "trtype": "TCP" 00:14:04.110 }, 00:14:04.110 "qid": 0, 00:14:04.110 "state": "enabled", 00:14:04.110 "thread": "nvmf_tgt_poll_group_000" 00:14:04.110 } 00:14:04.110 ]' 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.110 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.368 20:30:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.299 20:30:26 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.299 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.556 20:30:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.122 00:14:06.122 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:06.122 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.122 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:06.380 { 00:14:06.380 "auth": { 00:14:06.380 "dhgroup": "ffdhe8192", 00:14:06.380 "digest": "sha512", 
00:14:06.380 "state": "completed" 00:14:06.380 }, 00:14:06.380 "cntlid": 137, 00:14:06.380 "listen_address": { 00:14:06.380 "adrfam": "IPv4", 00:14:06.380 "traddr": "10.0.0.2", 00:14:06.380 "trsvcid": "4420", 00:14:06.380 "trtype": "TCP" 00:14:06.380 }, 00:14:06.380 "peer_address": { 00:14:06.380 "adrfam": "IPv4", 00:14:06.380 "traddr": "10.0.0.1", 00:14:06.380 "trsvcid": "35712", 00:14:06.380 "trtype": "TCP" 00:14:06.380 }, 00:14:06.380 "qid": 0, 00:14:06.380 "state": "enabled", 00:14:06.380 "thread": "nvmf_tgt_poll_group_000" 00:14:06.380 } 00:14:06.380 ]' 00:14:06.380 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:06.640 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.640 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:06.640 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:06.640 20:30:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:06.640 20:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.640 20:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.640 20:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.898 20:30:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:07.830 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:08.088 20:30:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.088 20:30:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.035 00:14:09.035 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.035 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.035 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:09.293 { 00:14:09.293 "auth": { 00:14:09.293 "dhgroup": "ffdhe8192", 00:14:09.293 "digest": "sha512", 00:14:09.293 "state": "completed" 00:14:09.293 }, 00:14:09.293 "cntlid": 139, 00:14:09.293 "listen_address": { 00:14:09.293 "adrfam": "IPv4", 00:14:09.293 "traddr": "10.0.0.2", 00:14:09.293 "trsvcid": "4420", 00:14:09.293 "trtype": "TCP" 00:14:09.293 }, 00:14:09.293 "peer_address": { 00:14:09.293 "adrfam": "IPv4", 00:14:09.293 "traddr": "10.0.0.1", 00:14:09.293 "trsvcid": "42982", 00:14:09.293 "trtype": "TCP" 00:14:09.293 }, 00:14:09.293 "qid": 0, 00:14:09.293 "state": "enabled", 00:14:09.293 "thread": "nvmf_tgt_poll_group_000" 00:14:09.293 } 00:14:09.293 ]' 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.293 20:30:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.552 20:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:01:N2NmODBlN2Q0MWIwNjVjNzc1Mzc2ZjE5YTBiYjNkMjOqK9Z2: --dhchap-ctrl-secret DHHC-1:02:NTVkMGI3M2Q0MGJiMDk1YTgyMjQyNTI0MWQ2NzYyNzAzODk3OTMxYmUxZjU3ZDY3F+CkRA==: 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.487 20:30:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.745 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.311 00:14:11.311 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:11.311 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.311 20:30:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:11.875 { 00:14:11.875 "auth": { 00:14:11.875 "dhgroup": "ffdhe8192", 00:14:11.875 "digest": "sha512", 00:14:11.875 "state": "completed" 00:14:11.875 }, 00:14:11.875 "cntlid": 141, 00:14:11.875 "listen_address": { 00:14:11.875 "adrfam": "IPv4", 00:14:11.875 "traddr": "10.0.0.2", 00:14:11.875 "trsvcid": "4420", 00:14:11.875 "trtype": "TCP" 00:14:11.875 }, 00:14:11.875 "peer_address": { 00:14:11.875 "adrfam": "IPv4", 00:14:11.875 "traddr": "10.0.0.1", 00:14:11.875 "trsvcid": "43002", 00:14:11.875 "trtype": "TCP" 00:14:11.875 }, 00:14:11.875 "qid": 0, 00:14:11.875 "state": "enabled", 00:14:11.875 "thread": "nvmf_tgt_poll_group_000" 00:14:11.875 } 00:14:11.875 ]' 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.875 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.438 20:30:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:02:Y2Y2YTVjMTA5ZmNkOTBkNjFkZDU5MDQxMWM2NGZhNmM5YjRlY2NlZmJlZmYwYzgwuDpLrg==: --dhchap-ctrl-secret DHHC-1:01:MDViY2Q2MTEyMDI3MmVhYjRmOGE2OGE0YzY0MmIwMzDXbShd: 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.004 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:13.262 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:14:13.262 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.262 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:13.262 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.263 20:30:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.828 00:14:14.086 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.086 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.086 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.345 { 00:14:14.345 "auth": { 00:14:14.345 "dhgroup": "ffdhe8192", 00:14:14.345 "digest": "sha512", 00:14:14.345 "state": "completed" 00:14:14.345 }, 00:14:14.345 "cntlid": 143, 00:14:14.345 "listen_address": { 00:14:14.345 "adrfam": "IPv4", 00:14:14.345 "traddr": "10.0.0.2", 00:14:14.345 "trsvcid": "4420", 00:14:14.345 "trtype": "TCP" 00:14:14.345 }, 00:14:14.345 "peer_address": { 00:14:14.345 "adrfam": "IPv4", 00:14:14.345 "traddr": "10.0.0.1", 00:14:14.345 "trsvcid": "43030", 00:14:14.345 "trtype": "TCP" 00:14:14.345 }, 00:14:14.345 "qid": 0, 00:14:14.345 "state": "enabled", 00:14:14.345 "thread": "nvmf_tgt_poll_group_000" 00:14:14.345 } 00:14:14.345 ]' 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.345 20:30:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.911 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:15.478 20:30:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.736 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:16.695 00:14:16.695 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:16.695 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.695 20:30:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:16.695 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.695 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.696 20:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.696 20:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.696 20:30:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.696 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.696 { 00:14:16.696 "auth": { 00:14:16.696 "dhgroup": "ffdhe8192", 00:14:16.696 "digest": "sha512", 00:14:16.696 "state": "completed" 00:14:16.696 }, 00:14:16.696 "cntlid": 145, 00:14:16.696 "listen_address": { 00:14:16.696 "adrfam": "IPv4", 00:14:16.696 "traddr": "10.0.0.2", 00:14:16.696 "trsvcid": "4420", 00:14:16.696 "trtype": "TCP" 00:14:16.696 }, 00:14:16.696 "peer_address": { 00:14:16.696 "adrfam": "IPv4", 00:14:16.696 "traddr": "10.0.0.1", 00:14:16.696 "trsvcid": "43058", 00:14:16.696 "trtype": "TCP" 00:14:16.696 }, 00:14:16.696 "qid": 0, 00:14:16.696 "state": "enabled", 00:14:16.696 "thread": "nvmf_tgt_poll_group_000" 00:14:16.696 } 
00:14:16.696 ]' 00:14:16.696 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.954 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.214 20:30:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:00:M2E3ZWFjNTQ1Y2M5OTJlYWQwYmEwNDI2Y2MyM2JiZDk5ZmUzZGE1M2EyOWZlNzBix8QYQA==: --dhchap-ctrl-secret DHHC-1:03:NWU5ZjVhY2NmYzczYmZlMGMyNDQ2YjAxNWE3M2U3NzMxNjU3ZjczYTQ0MmU4YjUxNjY2MmQzNDdiNDllZDIwOIo4Z0U=: 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.149 20:30:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:18.149 20:30:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:14:18.714 2024/07/15 20:30:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:18.714 request: 00:14:18.714 { 00:14:18.714 "method": "bdev_nvme_attach_controller", 00:14:18.714 "params": { 00:14:18.714 "name": "nvme0", 00:14:18.714 "trtype": "tcp", 00:14:18.714 "traddr": "10.0.0.2", 00:14:18.714 "adrfam": "ipv4", 00:14:18.714 "trsvcid": "4420", 00:14:18.714 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:18.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5", 00:14:18.714 "prchk_reftag": false, 00:14:18.714 "prchk_guard": false, 00:14:18.714 "hdgst": false, 00:14:18.714 "ddgst": false, 00:14:18.714 "dhchap_key": "key2" 00:14:18.714 } 00:14:18.714 } 00:14:18.714 Got JSON-RPC error response 00:14:18.714 GoRPCClient: error on JSON-RPC call 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:18.714 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:19.647 2024/07/15 20:30:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:19.647 request: 00:14:19.647 { 00:14:19.647 "method": "bdev_nvme_attach_controller", 00:14:19.647 "params": { 00:14:19.647 "name": "nvme0", 00:14:19.647 "trtype": "tcp", 00:14:19.647 "traddr": "10.0.0.2", 00:14:19.647 "adrfam": "ipv4", 00:14:19.647 "trsvcid": "4420", 00:14:19.647 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:19.647 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5", 00:14:19.647 "prchk_reftag": false, 00:14:19.647 "prchk_guard": false, 00:14:19.647 "hdgst": false, 00:14:19.647 "ddgst": false, 00:14:19.647 "dhchap_key": "key1", 00:14:19.647 "dhchap_ctrlr_key": "ckey2" 00:14:19.647 } 00:14:19.647 } 00:14:19.647 Got JSON-RPC error response 00:14:19.647 GoRPCClient: error on JSON-RPC call 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key1 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.647 20:30:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.217 2024/07/15 20:30:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:20.217 request: 00:14:20.218 { 00:14:20.218 "method": "bdev_nvme_attach_controller", 00:14:20.218 "params": { 00:14:20.218 "name": "nvme0", 00:14:20.218 "trtype": "tcp", 00:14:20.218 "traddr": "10.0.0.2", 00:14:20.218 "adrfam": "ipv4", 00:14:20.218 "trsvcid": "4420", 00:14:20.218 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:14:20.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5", 00:14:20.218 "prchk_reftag": false, 00:14:20.218 "prchk_guard": false, 00:14:20.218 "hdgst": false, 00:14:20.218 "ddgst": false, 00:14:20.218 "dhchap_key": "key1", 00:14:20.218 "dhchap_ctrlr_key": "ckey1" 00:14:20.218 } 00:14:20.218 } 00:14:20.218 Got JSON-RPC error response 00:14:20.218 GoRPCClient: error on JSON-RPC call 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77783 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 77783 ']' 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77783 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77783 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:20.218 killing process with pid 77783 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77783' 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77783 00:14:20.218 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77783 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82757 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82757 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82757 ']' 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.477 20:30:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.477 20:30:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82757 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 82757 ']' 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
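For reference, the restart performed by target/auth.sh@139 above amounts to relaunching nvmf_tgt inside the test namespace with the nvmf_auth log flag and then waiting for its RPC socket before reconfiguring it. A minimal sketch, with the binary, namespace and socket paths as they appear in this run; the polling loop is an assumption standing in for the harness's waitforlisten helper:

  # Relaunch the target with DH-HMAC-CHAP auth logging enabled; --wait-for-rpc gates startup on RPC.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Stand-in for waitforlisten: poll the RPC socket until the app answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done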
00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.736 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.303 20:30:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.236 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.236 { 00:14:22.236 "auth": { 00:14:22.236 "dhgroup": 
"ffdhe8192", 00:14:22.236 "digest": "sha512", 00:14:22.236 "state": "completed" 00:14:22.236 }, 00:14:22.236 "cntlid": 1, 00:14:22.236 "listen_address": { 00:14:22.236 "adrfam": "IPv4", 00:14:22.236 "traddr": "10.0.0.2", 00:14:22.236 "trsvcid": "4420", 00:14:22.236 "trtype": "TCP" 00:14:22.236 }, 00:14:22.236 "peer_address": { 00:14:22.236 "adrfam": "IPv4", 00:14:22.236 "traddr": "10.0.0.1", 00:14:22.236 "trsvcid": "55866", 00:14:22.236 "trtype": "TCP" 00:14:22.236 }, 00:14:22.236 "qid": 0, 00:14:22.236 "state": "enabled", 00:14:22.236 "thread": "nvmf_tgt_poll_group_000" 00:14:22.236 } 00:14:22.236 ]' 00:14:22.236 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.495 20:30:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.752 20:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-secret DHHC-1:03:NzU4MDU0ZjNlNDZkZGJhYmE5MTc1MzczOTNjODRmNmNlNDE3Y2FhY2NmZjRkNTdhODc4YzRmMzRiMTc5OTY2M/hKznc=: 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --dhchap-key key3 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:23.685 20:30:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.011 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.269 2024/07/15 20:30:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:24.269 request: 00:14:24.269 { 00:14:24.269 "method": "bdev_nvme_attach_controller", 00:14:24.269 "params": { 00:14:24.269 "name": "nvme0", 00:14:24.269 "trtype": "tcp", 00:14:24.269 "traddr": "10.0.0.2", 00:14:24.269 "adrfam": "ipv4", 00:14:24.269 "trsvcid": "4420", 00:14:24.269 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:24.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5", 00:14:24.269 "prchk_reftag": false, 00:14:24.269 "prchk_guard": false, 00:14:24.269 "hdgst": false, 00:14:24.269 "ddgst": false, 00:14:24.269 "dhchap_key": "key3" 00:14:24.269 } 00:14:24.269 } 00:14:24.269 Got JSON-RPC error response 00:14:24.269 GoRPCClient: error on JSON-RPC call 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 
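The rejected attach attempts in this part of the trace are provoked from the host side: bdev_nvme_set_options narrows which DH-HMAC-CHAP digests and DH groups the initiator offers, and the attach is then expected to fail once that offer no longer matches what was negotiated in the successful connect above (sha512/ffdhe8192 in the qpair dump), hence the NOT wrapper and the Code=-5 Input/output error. Condensed to the underlying RPC calls, with the socket, address and NQNs exactly as in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock

  # Offer only sha256, then try to attach with key3 -- rejected, as logged above.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-digests sha256
  "$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

  # Re-allow all digests but restrict DH groups to ffdhe2048 for the next negative case.
  "$rpc" -s "$host_sock" bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512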
00:14:24.269 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:24.527 20:30:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:25.093 2024/07/15 20:30:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:25.093 request: 00:14:25.093 { 00:14:25.093 "method": "bdev_nvme_attach_controller", 00:14:25.093 "params": { 00:14:25.093 "name": "nvme0", 00:14:25.093 "trtype": "tcp", 00:14:25.093 "traddr": "10.0.0.2", 00:14:25.093 "adrfam": "ipv4", 00:14:25.093 "trsvcid": "4420", 00:14:25.093 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:25.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5", 00:14:25.093 "prchk_reftag": false, 00:14:25.093 "prchk_guard": false, 00:14:25.093 "hdgst": false, 00:14:25.093 "ddgst": false, 00:14:25.093 "dhchap_key": "key3" 00:14:25.093 } 00:14:25.093 } 00:14:25.093 Got JSON-RPC error response 00:14:25.093 GoRPCClient: error on JSON-RPC call 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
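The recurring "local es=0 ... (( !es == 0 ))" bookkeeping around each rejected call is the harness's NOT helper, which inverts the exit status so that an expected failure counts as a pass. A rough reconstruction from the xtrace, simplified: the real helper also validates its argument via valid_exec_arg and the type -t case statement visible above, and its signal handling may differ in detail.

  NOT() {
      local es=0
      "$@" || es=$?
      (( es > 128 )) && return "$es"   # death by signal is not the failure we want
      (( !es == 0 ))                   # succeed only if the wrapped command actually failed
  }

  # As used above: the test case passes only because the attach is rejected.
  NOT hostrpc bdev_nvme_attach_controller -b nvme0 ... --dhchap-key key3   # full arguments as in the log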
00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:25.093 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.351 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:25.352 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:25.610 2024/07/15 20:30:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:25.610 request: 00:14:25.610 { 00:14:25.610 "method": "bdev_nvme_attach_controller", 00:14:25.610 "params": { 00:14:25.610 "name": "nvme0", 00:14:25.610 "trtype": "tcp", 00:14:25.610 "traddr": "10.0.0.2", 00:14:25.610 "adrfam": "ipv4", 00:14:25.610 "trsvcid": "4420", 00:14:25.610 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:25.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5", 00:14:25.610 "prchk_reftag": false, 00:14:25.610 "prchk_guard": false, 00:14:25.610 "hdgst": false, 00:14:25.610 "ddgst": false, 00:14:25.610 "dhchap_key": "key0", 00:14:25.610 "dhchap_ctrlr_key": "key1" 00:14:25.610 } 00:14:25.610 } 00:14:25.610 Got JSON-RPC error response 00:14:25.610 GoRPCClient: error on JSON-RPC call 00:14:25.610 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:14:25.610 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.610 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.610 20:30:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.610 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:25.610 20:30:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:14:25.868 00:14:25.868 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:14:25.868 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.868 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77808 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 
-- # '[' -z 77808 ']' 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 77808 00:14:26.435 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77808 00:14:26.694 killing process with pid 77808 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77808' 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 77808 00:14:26.694 20:30:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 77808 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.953 rmmod nvme_tcp 00:14:26.953 rmmod nvme_fabrics 00:14:26.953 rmmod nvme_keyring 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:14:26.953 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82757 ']' 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82757 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 82757 ']' 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 82757 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82757 00:14:26.954 killing process with pid 82757 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82757' 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 82757 00:14:26.954 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 82757 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.tAh /tmp/spdk.key-sha256.CLP /tmp/spdk.key-sha384.17D /tmp/spdk.key-sha512.xrd /tmp/spdk.key-sha512.FU4 /tmp/spdk.key-sha384.ufv /tmp/spdk.key-sha256.qKb '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:27.213 00:14:27.213 real 3m1.107s 00:14:27.213 user 7m22.214s 00:14:27.213 sys 0m21.851s 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.213 20:30:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.213 ************************************ 00:14:27.213 END TEST nvmf_auth_target 00:14:27.213 ************************************ 00:14:27.213 20:30:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:27.213 20:30:48 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:14:27.213 20:30:48 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:27.213 20:30:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:27.213 20:30:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.213 20:30:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.213 ************************************ 00:14:27.213 START TEST nvmf_bdevio_no_huge 00:14:27.213 ************************************ 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:27.213 * Looking for test storage... 
00:14:27.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.213 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.214 20:30:48 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:27.214 Cannot find device "nvmf_tgt_br" 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:14:27.214 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:27.473 Cannot find device "nvmf_tgt_br2" 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:27.473 Cannot find device "nvmf_tgt_br" 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:27.473 Cannot find device "nvmf_tgt_br2" 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 
00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:27.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:27.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:27.473 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:27.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:14:27.733 00:14:27.733 --- 10.0.0.2 ping statistics --- 00:14:27.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.733 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:27.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:27.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:27.733 00:14:27.733 --- 10.0.0.3 ping statistics --- 00:14:27.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.733 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:27.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:27.733 00:14:27.733 --- 10.0.0.1 ping statistics --- 00:14:27.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.733 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.733 20:30:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=83157 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 83157 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 83157 ']' 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
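Everything from nvmf_veth_init down to the three pings above builds the virtual topology the bdevio test runs on: the target lives in its own network namespace, veth pairs tie it to a host-side bridge, and connectivity is checked in both directions before the target starts. Condensed to the essential commands, with the interface names and addresses from this run; the second target interface (nvmf_tgt_if2, 10.0.0.3) is set up the same way and omitted here:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target side

  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP listener port
  ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # reachable in both directions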
00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.733 20:30:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:27.733 [2024-07-15 20:30:49.073714] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:14:27.733 [2024-07-15 20:30:49.073804] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:27.733 [2024-07-15 20:30:49.215765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.991 [2024-07-15 20:30:49.377933] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.991 [2024-07-15 20:30:49.378147] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.991 [2024-07-15 20:30:49.378299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.991 [2024-07-15 20:30:49.378469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.991 [2024-07-15 20:30:49.378558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.991 [2024-07-15 20:30:49.378770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:27.991 [2024-07-15 20:30:49.379055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:27.991 [2024-07-15 20:30:49.379124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:27.991 [2024-07-15 20:30:49.379251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 [2024-07-15 20:30:50.212066] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 Malloc0 00:14:28.927 
20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:28.927 [2024-07-15 20:30:50.249814] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:28.927 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:28.927 { 00:14:28.927 "params": { 00:14:28.927 "name": "Nvme$subsystem", 00:14:28.927 "trtype": "$TEST_TRANSPORT", 00:14:28.927 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:28.927 "adrfam": "ipv4", 00:14:28.927 "trsvcid": "$NVMF_PORT", 00:14:28.927 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:28.927 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:28.927 "hdgst": ${hdgst:-false}, 00:14:28.927 "ddgst": ${ddgst:-false} 00:14:28.927 }, 00:14:28.928 "method": "bdev_nvme_attach_controller" 00:14:28.928 } 00:14:28.928 EOF 00:14:28.928 )") 00:14:28.928 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:28.928 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
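gen_nvmf_target_json, traced just above, renders one bdev_nvme_attach_controller entry per subsystem from the heredoc template and pipes the result through jq; the filled-in JSON is printed right below. The /dev/fd/62 argument on the bdevio command line is what a bash process substitution expands to, so the invocation is effectively the following (paths and memory size as in this run; this is only the shape of the call, not the harness's exact wrapper):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024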
00:14:28.928 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:28.928 20:30:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:28.928 "params": { 00:14:28.928 "name": "Nvme1", 00:14:28.928 "trtype": "tcp", 00:14:28.928 "traddr": "10.0.0.2", 00:14:28.928 "adrfam": "ipv4", 00:14:28.928 "trsvcid": "4420", 00:14:28.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:28.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:28.928 "hdgst": false, 00:14:28.928 "ddgst": false 00:14:28.928 }, 00:14:28.928 "method": "bdev_nvme_attach_controller" 00:14:28.928 }' 00:14:28.928 [2024-07-15 20:30:50.317467] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:14:28.928 [2024-07-15 20:30:50.317581] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83217 ] 00:14:29.186 [2024-07-15 20:30:50.474526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:29.186 [2024-07-15 20:30:50.622110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.186 [2024-07-15 20:30:50.622184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.186 [2024-07-15 20:30:50.622189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.444 I/O targets: 00:14:29.444 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:29.444 00:14:29.444 00:14:29.444 CUnit - A unit testing framework for C - Version 2.1-3 00:14:29.444 http://cunit.sourceforge.net/ 00:14:29.444 00:14:29.444 00:14:29.444 Suite: bdevio tests on: Nvme1n1 00:14:29.444 Test: blockdev write read block ...passed 00:14:29.444 Test: blockdev write zeroes read block ...passed 00:14:29.444 Test: blockdev write zeroes read no split ...passed 00:14:29.444 Test: blockdev write zeroes read split ...passed 00:14:29.444 Test: blockdev write zeroes read split partial ...passed 00:14:29.444 Test: blockdev reset ...[2024-07-15 20:30:50.942410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:29.444 [2024-07-15 20:30:50.942533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcd460 (9): Bad file descriptor 00:14:29.703 passed 00:14:29.703 Test: blockdev write read 8 blocks ...[2024-07-15 20:30:50.963091] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:29.703 passed 00:14:29.703 Test: blockdev write read size > 128k ...passed 00:14:29.703 Test: blockdev write read invalid size ...passed 00:14:29.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:29.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:29.703 Test: blockdev write read max offset ...passed 00:14:29.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:29.703 Test: blockdev writev readv 8 blocks ...passed 00:14:29.703 Test: blockdev writev readv 30 x 1block ...passed 00:14:29.703 Test: blockdev writev readv block ...passed 00:14:29.703 Test: blockdev writev readv size > 128k ...passed 00:14:29.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:29.703 Test: blockdev comparev and writev ...[2024-07-15 20:30:51.138090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.138149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.138170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.138181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.138543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.138569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.138589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.138599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.138896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.138915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.138931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.138943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.139231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.139248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:29.703 [2024-07-15 20:30:51.139264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:29.703 [2024-07-15 20:30:51.139274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:29.703 passed 00:14:29.962 Test: blockdev nvme passthru rw ...passed 00:14:29.962 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:30:51.221230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.962 [2024-07-15 20:30:51.221271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:29.962 passed 00:14:29.962 Test: blockdev nvme admin passthru ...[2024-07-15 20:30:51.221639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.962 [2024-07-15 20:30:51.221666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:29.962 [2024-07-15 20:30:51.221795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.962 [2024-07-15 20:30:51.221813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:29.962 [2024-07-15 20:30:51.221953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:29.962 [2024-07-15 20:30:51.221971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:29.962 passed 00:14:29.962 Test: blockdev copy ...passed 00:14:29.962 00:14:29.962 Run Summary: Type Total Ran Passed Failed Inactive 00:14:29.962 suites 1 1 n/a 0 0 00:14:29.962 tests 23 23 23 0 0 00:14:29.962 asserts 152 152 152 0 n/a 00:14:29.962 00:14:29.962 Elapsed time = 0.932 seconds 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.220 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.478 rmmod nvme_tcp 00:14:30.478 rmmod nvme_fabrics 00:14:30.478 rmmod nvme_keyring 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 83157 ']' 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 83157 00:14:30.478 
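The cleanup path traced here and continued below boils down to: drop the subsystem over RPC, unload the kernel-side NVMe/TCP initiator modules, stop the target process by pid, and flush the namespace plumbing. A condensed sketch of that flow (simplified; the real nvmftestfini and killprocess helpers carry retries, sudo checks and namespace teardown not shown):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nvmfpid=83157   # pid captured when nvmf_tgt was started

# Remove the test subsystem, then the initiator-side kernel modules.
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target and wait for it to exit. wait only works here because the
# harness started nvmf_tgt as a child of the same shell.
if kill -0 "$nvmfpid" 2>/dev/null; then
    kill "$nvmfpid"
    wait "$nvmfpid"
fi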
20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 83157 ']' 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 83157 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83157 00:14:30.478 killing process with pid 83157 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83157' 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 83157 00:14:30.478 20:30:51 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 83157 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:30.736 00:14:30.736 real 0m3.631s 00:14:30.736 user 0m13.307s 00:14:30.736 sys 0m1.300s 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:30.736 20:30:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:30.736 ************************************ 00:14:30.736 END TEST nvmf_bdevio_no_huge 00:14:30.736 ************************************ 00:14:30.994 20:30:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:30.994 20:30:52 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:30.994 20:30:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:30.994 20:30:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.994 20:30:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.994 ************************************ 00:14:30.994 START TEST nvmf_tls 00:14:30.994 ************************************ 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:30.994 * Looking for test storage... 
00:14:30.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.994 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:30.995 Cannot find device "nvmf_tgt_br" 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:30.995 Cannot find device "nvmf_tgt_br2" 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:30.995 Cannot find device "nvmf_tgt_br" 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:30.995 Cannot find device "nvmf_tgt_br2" 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:30.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:30.995 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:30.995 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:31.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:31.254 00:14:31.254 --- 10.0.0.2 ping statistics --- 00:14:31.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.254 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:31.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:14:31.254 00:14:31.254 --- 10.0.0.3 ping statistics --- 00:14:31.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.254 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:14:31.254 00:14:31.254 --- 10.0.0.1 ping statistics --- 00:14:31.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.254 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83405 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83405 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83405 ']' 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.254 20:30:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.512 [2024-07-15 20:30:52.768293] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:14:31.512 [2024-07-15 20:30:52.768399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.512 [2024-07-15 20:30:52.910086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.512 [2024-07-15 20:30:52.981903] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.512 [2024-07-15 20:30:52.981965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
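The successful pings above come from the virtual topology nvmf_veth_init just built, and the nvmf_tgt process launched next runs inside that namespace (ip netns exec nvmf_tgt_ns_spdk ...), which is why 10.0.0.2:4420 is reachable from the root namespace. Condensed to its essentials (the second target interface and the error-tolerant pre-cleanup are omitted), the setup is:

NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
# veth pairs: the *_if ends carry addresses, the *_br ends go on the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

# Addressing: initiator 10.0.0.1, target 10.0.0.2, same /24.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the two root-namespace ends and open TCP/4420 toward the initiator side.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator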
00:14:31.512 [2024-07-15 20:30:52.981980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.512 [2024-07-15 20:30:52.981990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.512 [2024-07-15 20:30:52.981998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.512 [2024-07-15 20:30:52.982027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:32.442 20:30:53 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:32.700 true 00:14:32.700 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:32.700 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:33.265 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:33.265 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:33.265 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:33.265 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:33.265 20:30:54 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:33.830 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:33.830 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:33.830 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:34.088 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:34.088 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:34.346 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:34.346 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:34.346 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:34.346 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:34.602 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:34.602 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:34.602 20:30:55 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:34.860 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:34.860 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
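The tls.sh probes above drive the target's socket layer purely over RPC: the ssl implementation is made the default, the TLS version is pinned, and every set is read back with sock_impl_get_options piped through jq; the kTLS toggle traced next works the same way. A compact sketch of that set-then-verify pattern:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc_py" sock_set_default_impl -i ssl
"$rpc_py" sock_impl_set_options -i ssl --tls-version 13

# Read the option back before relying on it.
version=$("$rpc_py" sock_impl_get_options -i ssl | jq -r .tls_version)
[[ "$version" == 13 ]] || exit 1

# Same pattern for kTLS: --enable-ktls / --disable-ktls, then read back .enable_ktls.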
00:14:35.117 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:14:35.117 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:14:35.117 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:35.375 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:14:35.375 20:30:56 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:14:35.634 20:30:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qYPduh2kEd 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.L3lqx161nn 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qYPduh2kEd 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.L3lqx161nn 00:14:35.893 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:36.151 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:36.408 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qYPduh2kEd 
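Both interchange-format keys generated above land in 0600 temp files, and setup_nvmf_tgt (traced next) wires the first one into the target: the TCP transport is created, the subsystem gets a TLS-enabled listener (-k) plus a malloc namespace, and host1 is registered with --psk pointing at the key file. A condensed sketch reusing the key string from the trace (the base64 payload is assumed to encode the configured key plus an integrity tag, per the NVMe/TCP PSK interchange format; in the traced run the app was started with --wait-for-rpc, so the socket options and framework_start_init above happen before any of this):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"   # keep the PSK private to the test user

"$rpc_py" nvmf_create_transport -t tcp -o
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$rpc_py" bdev_malloc_create 32 4096 -b malloc0
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"

# The spdk_nvme_perf run traced below then drives I/O over the encrypted
# connection with -S ssl and --psk-path pointing at the same key file.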
00:14:36.408 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qYPduh2kEd 00:14:36.408 20:30:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:36.666 [2024-07-15 20:30:58.004254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.666 20:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:36.924 20:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:37.181 [2024-07-15 20:30:58.576380] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:37.181 [2024-07-15 20:30:58.576599] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.181 20:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:37.438 malloc0 00:14:37.438 20:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:37.694 20:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qYPduh2kEd 00:14:37.950 [2024-07-15 20:30:59.319769] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:37.950 20:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qYPduh2kEd 00:14:50.138 Initializing NVMe Controllers 00:14:50.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:50.138 Initialization complete. Launching workers. 
00:14:50.138 ======================================================== 00:14:50.138 Latency(us) 00:14:50.138 Device Information : IOPS MiB/s Average min max 00:14:50.138 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8796.79 34.36 7277.10 1988.58 21104.05 00:14:50.138 ======================================================== 00:14:50.138 Total : 8796.79 34.36 7277.10 1988.58 21104.05 00:14:50.138 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYPduh2kEd 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qYPduh2kEd' 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83768 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83768 /var/tmp/bdevperf.sock 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83768 ']' 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.138 20:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.138 [2024-07-15 20:31:09.591762] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:14:50.138 [2024-07-15 20:31:09.591859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83768 ] 00:14:50.138 [2024-07-15 20:31:09.723283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.138 [2024-07-15 20:31:09.812099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.138 20:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.138 20:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:50.138 20:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qYPduh2kEd 00:14:50.138 [2024-07-15 20:31:10.949077] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.138 [2024-07-15 20:31:10.949251] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:50.138 TLSTESTn1 00:14:50.138 20:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:50.138 Running I/O for 10 seconds... 00:15:00.117 00:15:00.117 Latency(us) 00:15:00.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.117 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:00.117 Verification LBA range: start 0x0 length 0x2000 00:15:00.117 TLSTESTn1 : 10.02 2742.86 10.71 0.00 0.00 46579.87 7268.54 47662.55 00:15:00.117 =================================================================================================================== 00:15:00.117 Total : 2742.86 10.71 0.00 0.00 46579.87 7268.54 47662.55 00:15:00.117 0 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83768 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83768 ']' 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83768 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83768 00:15:00.117 killing process with pid 83768 00:15:00.117 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.117 00:15:00.117 Latency(us) 00:15:00.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.117 =================================================================================================================== 00:15:00.117 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
83768' 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83768 00:15:00.117 [2024-07-15 20:31:21.250751] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83768 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L3lqx161nn 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L3lqx161nn 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.L3lqx161nn 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.L3lqx161nn' 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.117 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83914 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83914 /var/tmp/bdevperf.sock 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83914 ']' 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.118 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.118 [2024-07-15 20:31:21.493051] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:00.118 [2024-07-15 20:31:21.493153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83914 ] 00:15:00.376 [2024-07-15 20:31:21.627647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.376 [2024-07-15 20:31:21.688694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.376 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:00.376 20:31:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:00.376 20:31:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.L3lqx161nn 00:15:00.634 [2024-07-15 20:31:22.025082] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.635 [2024-07-15 20:31:22.025201] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:00.635 [2024-07-15 20:31:22.030573] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:00.635 [2024-07-15 20:31:22.031122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4aca0 (107): Transport endpoint is not connected 00:15:00.635 [2024-07-15 20:31:22.032100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a4aca0 (9): Bad file descriptor 00:15:00.635 [2024-07-15 20:31:22.033095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:00.635 [2024-07-15 20:31:22.033132] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:00.635 [2024-07-15 20:31:22.033150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
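The controller errors above surface to the test as the JSON-RPC failure reported next: only the first key was registered for host1 on the target, so a handshake attempted with /tmp/tmp.L3lqx161nn never yields a usable connection and bdev_nvme_attach_controller comes back with -5. Inside the test's NOT wrapper that non-zero exit is the pass condition. Sketched as a standalone check (the attach command is the one traced above):

# Expected to fail: the PSK below was never registered on the target.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
       bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       --psk /tmp/tmp.L3lqx161nn; then
    echo "attach unexpectedly succeeded with the wrong PSK" >&2
    exit 1
fi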
00:15:00.635 2024/07/15 20:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.L3lqx161nn subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:00.635 request: 00:15:00.635 { 00:15:00.635 "method": "bdev_nvme_attach_controller", 00:15:00.635 "params": { 00:15:00.635 "name": "TLSTEST", 00:15:00.635 "trtype": "tcp", 00:15:00.635 "traddr": "10.0.0.2", 00:15:00.635 "adrfam": "ipv4", 00:15:00.635 "trsvcid": "4420", 00:15:00.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.635 "prchk_reftag": false, 00:15:00.635 "prchk_guard": false, 00:15:00.635 "hdgst": false, 00:15:00.635 "ddgst": false, 00:15:00.635 "psk": "/tmp/tmp.L3lqx161nn" 00:15:00.635 } 00:15:00.635 } 00:15:00.635 Got JSON-RPC error response 00:15:00.635 GoRPCClient: error on JSON-RPC call 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83914 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83914 ']' 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83914 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83914 00:15:00.635 killing process with pid 83914 00:15:00.635 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.635 00:15:00.635 Latency(us) 00:15:00.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.635 =================================================================================================================== 00:15:00.635 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83914' 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83914 00:15:00.635 [2024-07-15 20:31:22.083449] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:00.635 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83914 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYPduh2kEd 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYPduh2kEd 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qYPduh2kEd 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:00.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qYPduh2kEd' 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83945 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83945 /var/tmp/bdevperf.sock 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83945 ']' 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.923 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.923 [2024-07-15 20:31:22.306973] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:00.923 [2024-07-15 20:31:22.307079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83945 ] 00:15:01.181 [2024-07-15 20:31:22.466206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.181 [2024-07-15 20:31:22.554289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.181 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.181 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:01.181 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qYPduh2kEd 00:15:01.439 [2024-07-15 20:31:22.897900] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.439 [2024-07-15 20:31:22.898574] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:01.439 [2024-07-15 20:31:22.909322] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:01.439 [2024-07-15 20:31:22.909382] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:15:01.439 [2024-07-15 20:31:22.909462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:01.439 [2024-07-15 20:31:22.910223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f2ca0 (107): Transport endpoint is not connected 00:15:01.439 [2024-07-15 20:31:22.911194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6f2ca0 (9): Bad file descriptor 00:15:01.439 [2024-07-15 20:31:22.912190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:01.439 [2024-07-15 20:31:22.912385] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:01.439 [2024-07-15 20:31:22.912429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
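Here the key is right but the identity is wrong: the target looks the PSK up by the TLS identity string "NVMe0R01 <hostnqn> <subnqn>" shown in the error above, and only host1 was registered with nvmf_subsystem_add_host, so the server-side lookup for host2 fails and the attach ends in the same -5 reported below. For contrast, the registration that would make host2 acceptable, deliberately not performed by this test, would look like:

# Hypothetical call, not part of the traced run: registering host2 with the same
# PSK would let the attach above succeed, which is exactly what this negative
# test is guarding against.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qYPduh2kEd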
00:15:01.439 2024/07/15 20:31:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.qYPduh2kEd subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:01.439 request: 00:15:01.439 { 00:15:01.439 "method": "bdev_nvme_attach_controller", 00:15:01.439 "params": { 00:15:01.439 "name": "TLSTEST", 00:15:01.439 "trtype": "tcp", 00:15:01.439 "traddr": "10.0.0.2", 00:15:01.439 "adrfam": "ipv4", 00:15:01.440 "trsvcid": "4420", 00:15:01.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:01.440 "prchk_reftag": false, 00:15:01.440 "prchk_guard": false, 00:15:01.440 "hdgst": false, 00:15:01.440 "ddgst": false, 00:15:01.440 "psk": "/tmp/tmp.qYPduh2kEd" 00:15:01.440 } 00:15:01.440 } 00:15:01.440 Got JSON-RPC error response 00:15:01.440 GoRPCClient: error on JSON-RPC call 00:15:01.440 20:31:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83945 00:15:01.440 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83945 ']' 00:15:01.440 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83945 00:15:01.440 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:01.440 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.698 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83945 00:15:01.698 killing process with pid 83945 00:15:01.698 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.698 00:15:01.698 Latency(us) 00:15:01.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.698 =================================================================================================================== 00:15:01.698 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.698 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:01.698 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:01.698 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83945' 00:15:01.698 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83945 00:15:01.698 [2024-07-15 20:31:22.961174] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:01.698 20:31:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83945 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYPduh2kEd 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYPduh2kEd 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qYPduh2kEd 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qYPduh2kEd' 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83973 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83973 /var/tmp/bdevperf.sock 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 83973 ']' 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.698 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.955 [2024-07-15 20:31:23.216958] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:01.955 [2024-07-15 20:31:23.217092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83973 ] 00:15:01.955 [2024-07-15 20:31:23.358473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.955 [2024-07-15 20:31:23.418919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.212 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.212 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.212 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qYPduh2kEd 00:15:02.470 [2024-07-15 20:31:23.847510] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:02.470 [2024-07-15 20:31:23.847621] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:02.470 [2024-07-15 20:31:23.859487] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:02.470 [2024-07-15 20:31:23.859541] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:15:02.470 [2024-07-15 20:31:23.859617] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:02.470 [2024-07-15 20:31:23.860513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaaca0 (107): Transport endpoint is not connected 00:15:02.470 [2024-07-15 20:31:23.861489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcaaca0 (9): Bad file descriptor 00:15:02.470 [2024-07-15 20:31:23.862483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:15:02.470 [2024-07-15 20:31:23.862517] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:02.470 [2024-07-15 20:31:23.862534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:15:02.470 2024/07/15 20:31:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.qYPduh2kEd subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:02.470 request: 00:15:02.470 { 00:15:02.470 "method": "bdev_nvme_attach_controller", 00:15:02.470 "params": { 00:15:02.470 "name": "TLSTEST", 00:15:02.470 "trtype": "tcp", 00:15:02.470 "traddr": "10.0.0.2", 00:15:02.470 "adrfam": "ipv4", 00:15:02.470 "trsvcid": "4420", 00:15:02.470 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:02.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.470 "prchk_reftag": false, 00:15:02.470 "prchk_guard": false, 00:15:02.470 "hdgst": false, 00:15:02.470 "ddgst": false, 00:15:02.470 "psk": "/tmp/tmp.qYPduh2kEd" 00:15:02.470 } 00:15:02.470 } 00:15:02.470 Got JSON-RPC error response 00:15:02.470 GoRPCClient: error on JSON-RPC call 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83973 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83973 ']' 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83973 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83973 00:15:02.470 killing process with pid 83973 00:15:02.470 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.470 00:15:02.470 Latency(us) 00:15:02.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.470 =================================================================================================================== 00:15:02.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83973' 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83973 00:15:02.470 [2024-07-15 20:31:23.909478] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:02.470 20:31:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83973 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84005 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84005 /var/tmp/bdevperf.sock 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84005 ']' 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:02.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:02.728 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.728 [2024-07-15 20:31:24.127992] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:02.728 [2024-07-15 20:31:24.128087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84005 ] 00:15:02.986 [2024-07-15 20:31:24.282395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.986 [2024-07-15 20:31:24.372029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.986 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.986 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:02.986 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:03.553 [2024-07-15 20:31:24.830078] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:03.553 [2024-07-15 20:31:24.831809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1717240 (9): Bad file descriptor 00:15:03.553 [2024-07-15 20:31:24.832826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:15:03.553 [2024-07-15 20:31:24.832901] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:03.553 [2024-07-15 20:31:24.832934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:03.553 2024/07/15 20:31:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:03.553 request: 00:15:03.553 { 00:15:03.553 "method": "bdev_nvme_attach_controller", 00:15:03.553 "params": { 00:15:03.553 "name": "TLSTEST", 00:15:03.553 "trtype": "tcp", 00:15:03.553 "traddr": "10.0.0.2", 00:15:03.553 "adrfam": "ipv4", 00:15:03.553 "trsvcid": "4420", 00:15:03.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.553 "prchk_reftag": false, 00:15:03.553 "prchk_guard": false, 00:15:03.553 "hdgst": false, 00:15:03.553 "ddgst": false 00:15:03.553 } 00:15:03.553 } 00:15:03.553 Got JSON-RPC error response 00:15:03.553 GoRPCClient: error on JSON-RPC call 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84005 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84005 ']' 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84005 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84005 00:15:03.553 killing process with pid 84005 00:15:03.553 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.553 00:15:03.553 Latency(us) 00:15:03.553 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.553 =================================================================================================================== 00:15:03.553 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84005' 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84005 00:15:03.553 20:31:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84005 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 83405 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 83405 ']' 00:15:03.811 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 83405 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83405 00:15:03.812 killing process with pid 83405 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83405' 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 83405 00:15:03.812 [2024-07-15 20:31:25.082256] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 83405 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.q2MvlIhZ62 
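The key_long value generated in the trace above (NVMeTLSkey-1:02:MDAx...==:) is the TLS PSK interchange form of the configured key 00112233445566778899aabbccddeeff0011223344556677 with hash indicator 02 (SHA-384 in the NVMe TLS PSK convention). As a rough sketch of how such a string can be composed before it is written out with mode 0600 -- assuming, for illustration only, that the base64 payload is the configured key bytes followed by a little-endian CRC-32 trailer -- the shell snippet below mirrors the format_interchange_psk / mktemp / chmod sequence traced here:

    # Hedged sketch: compose an interchange-format PSK like the one above.
    # Assumption: payload = configured key bytes + little-endian CRC-32 trailer;
    # "02" is the PSK hash indicator (SHA-384).
    key=00112233445566778899aabbccddeeff0011223344556677
    key_long=$(python3 -c '
    import base64, struct, sys, zlib
    k = sys.argv[1].encode()
    crc = struct.pack("<I", zlib.crc32(k) & 0xffffffff)   # assumed CRC trailer
    print("NVMeTLSkey-1:02:%s:" % base64.b64encode(k + crc).decode())
    ' "$key")

    key_long_path=$(mktemp)
    echo -n "$key_long" > "$key_long_path"
    # Keep the file private: the chmod 0666 variant later in this log is rejected
    # by tcp_load_psk with "Incorrect permissions for PSK file".
    chmod 0600 "$key_long_path"

Running this with the same inputs should produce a string resembling the NVMeTLSkey-1:02:... value stored in /tmp/tmp.q2MvlIhZ62 above; whether the checksum trailer matches byte for byte is not verified here and depends on the assumed CRC convention.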
00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.q2MvlIhZ62 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.812 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84047 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84047 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84047 ']' 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.070 20:31:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.070 [2024-07-15 20:31:25.386786] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:04.070 [2024-07-15 20:31:25.386957] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.070 [2024-07-15 20:31:25.550555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.328 [2024-07-15 20:31:25.636516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.328 [2024-07-15 20:31:25.636592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.328 [2024-07-15 20:31:25.636609] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.328 [2024-07-15 20:31:25.636623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.328 [2024-07-15 20:31:25.636634] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.328 [2024-07-15 20:31:25.636684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.q2MvlIhZ62 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q2MvlIhZ62 00:15:04.894 20:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:05.460 [2024-07-15 20:31:26.692822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.460 20:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:05.718 20:31:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:05.976 [2024-07-15 20:31:27.344982] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.976 [2024-07-15 20:31:27.345199] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.976 20:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:06.234 malloc0 00:15:06.234 20:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:06.492 20:31:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:06.750 [2024-07-15 20:31:28.120098] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:06.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q2MvlIhZ62 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q2MvlIhZ62' 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84151 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84151 /var/tmp/bdevperf.sock 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84151 ']' 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.750 20:31:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.750 [2024-07-15 20:31:28.217830] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:06.750 [2024-07-15 20:31:28.217981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84151 ] 00:15:07.007 [2024-07-15 20:31:28.350907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.007 [2024-07-15 20:31:28.413679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.941 20:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.941 20:31:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:07.941 20:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:08.199 [2024-07-15 20:31:29.555928] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.199 [2024-07-15 20:31:29.556906] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:08.199 TLSTESTn1 00:15:08.199 20:31:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:08.457 Running I/O for 10 seconds... 
00:15:18.424 00:15:18.424 Latency(us) 00:15:18.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.424 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:18.424 Verification LBA range: start 0x0 length 0x2000 00:15:18.424 TLSTESTn1 : 10.02 3409.92 13.32 0.00 0.00 37464.47 7506.85 43134.60 00:15:18.425 =================================================================================================================== 00:15:18.425 Total : 3409.92 13.32 0.00 0.00 37464.47 7506.85 43134.60 00:15:18.425 0 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 84151 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84151 ']' 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84151 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84151 00:15:18.425 killing process with pid 84151 00:15:18.425 Received shutdown signal, test time was about 10.000000 seconds 00:15:18.425 00:15:18.425 Latency(us) 00:15:18.425 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.425 =================================================================================================================== 00:15:18.425 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84151' 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84151 00:15:18.425 [2024-07-15 20:31:39.838984] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:18.425 20:31:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84151 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.q2MvlIhZ62 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q2MvlIhZ62 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q2MvlIhZ62 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.q2MvlIhZ62 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:18.690 
20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:18.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.q2MvlIhZ62' 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84303 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84303 /var/tmp/bdevperf.sock 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84303 ']' 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:18.690 20:31:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.690 [2024-07-15 20:31:40.102306] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:18.690 [2024-07-15 20:31:40.102452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84303 ] 00:15:18.954 [2024-07-15 20:31:40.255018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.954 [2024-07-15 20:31:40.340701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.887 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:19.887 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:19.887 20:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:20.145 [2024-07-15 20:31:41.526080] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:20.145 [2024-07-15 20:31:41.526167] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:20.145 [2024-07-15 20:31:41.526180] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.q2MvlIhZ62 00:15:20.145 2024/07/15 20:31:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.q2MvlIhZ62 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for 
bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:15:20.145 request: 00:15:20.145 { 00:15:20.145 "method": "bdev_nvme_attach_controller", 00:15:20.145 "params": { 00:15:20.145 "name": "TLSTEST", 00:15:20.145 "trtype": "tcp", 00:15:20.145 "traddr": "10.0.0.2", 00:15:20.145 "adrfam": "ipv4", 00:15:20.145 "trsvcid": "4420", 00:15:20.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.145 "prchk_reftag": false, 00:15:20.145 "prchk_guard": false, 00:15:20.145 "hdgst": false, 00:15:20.145 "ddgst": false, 00:15:20.145 "psk": "/tmp/tmp.q2MvlIhZ62" 00:15:20.145 } 00:15:20.145 } 00:15:20.145 Got JSON-RPC error response 00:15:20.145 GoRPCClient: error on JSON-RPC call 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 84303 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84303 ']' 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84303 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84303 00:15:20.145 killing process with pid 84303 00:15:20.145 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.145 00:15:20.145 Latency(us) 00:15:20.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.145 =================================================================================================================== 00:15:20.145 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:20.145 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:20.146 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84303' 00:15:20.146 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84303 00:15:20.146 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84303 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 84047 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84047 ']' 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84047 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:20.403 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.404 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84047 00:15:20.404 killing process with pid 84047 00:15:20.404 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:20.404 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:20.404 20:31:41 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 84047' 00:15:20.404 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84047 00:15:20.404 [2024-07-15 20:31:41.763426] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:20.404 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84047 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84353 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84353 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84353 ']' 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:20.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:20.662 20:31:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.662 [2024-07-15 20:31:42.064833] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:20.662 [2024-07-15 20:31:42.064977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.920 [2024-07-15 20:31:42.206589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.920 [2024-07-15 20:31:42.294576] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.920 [2024-07-15 20:31:42.294660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.920 [2024-07-15 20:31:42.294679] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.920 [2024-07-15 20:31:42.294692] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.920 [2024-07-15 20:31:42.294705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:20.920 [2024-07-15 20:31:42.294743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.q2MvlIhZ62 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.q2MvlIhZ62 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.q2MvlIhZ62 00:15:21.854 20:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q2MvlIhZ62 00:15:21.855 20:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:22.111 [2024-07-15 20:31:43.391332] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.111 20:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:22.368 20:31:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:22.626 [2024-07-15 20:31:43.975516] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:22.626 [2024-07-15 20:31:43.975844] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.626 20:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:22.883 malloc0 00:15:22.883 20:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:23.460 20:31:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:23.718 [2024-07-15 20:31:45.050564] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:15:23.718 [2024-07-15 20:31:45.050610] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:15:23.718 [2024-07-15 20:31:45.050645] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:23.718 2024/07/15 20:31:45 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: 
map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.q2MvlIhZ62], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:15:23.718 request: 00:15:23.718 { 00:15:23.718 "method": "nvmf_subsystem_add_host", 00:15:23.718 "params": { 00:15:23.718 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.718 "host": "nqn.2016-06.io.spdk:host1", 00:15:23.718 "psk": "/tmp/tmp.q2MvlIhZ62" 00:15:23.718 } 00:15:23.718 } 00:15:23.718 Got JSON-RPC error response 00:15:23.718 GoRPCClient: error on JSON-RPC call 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 84353 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84353 ']' 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84353 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84353 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:23.718 killing process with pid 84353 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84353' 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84353 00:15:23.718 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84353 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.q2MvlIhZ62 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84470 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84470 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:23.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84470 ']' 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.976 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.976 [2024-07-15 20:31:45.330954] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:23.976 [2024-07-15 20:31:45.331050] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.976 [2024-07-15 20:31:45.465178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.234 [2024-07-15 20:31:45.548433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.234 [2024-07-15 20:31:45.548507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.234 [2024-07-15 20:31:45.548525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.234 [2024-07-15 20:31:45.548539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.234 [2024-07-15 20:31:45.548550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.234 [2024-07-15 20:31:45.548598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.q2MvlIhZ62 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q2MvlIhZ62 00:15:24.234 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:24.492 [2024-07-15 20:31:45.918614] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.492 20:31:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:25.058 20:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:25.318 [2024-07-15 20:31:46.670817] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:25.318 [2024-07-15 20:31:46.671076] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.318 20:31:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:25.577 malloc0 00:15:25.577 20:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:26.142 20:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:26.142 [2024-07-15 20:31:47.621944] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84559 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84559 /var/tmp/bdevperf.sock 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84559 ']' 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.400 20:31:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.400 [2024-07-15 20:31:47.694784] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:26.400 [2024-07-15 20:31:47.694898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84559 ] 00:15:26.400 [2024-07-15 20:31:47.828391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.657 [2024-07-15 20:31:47.921771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.589 20:31:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:27.589 20:31:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:27.589 20:31:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:27.847 [2024-07-15 20:31:49.266980] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.847 [2024-07-15 20:31:49.267125] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:27.847 TLSTESTn1 00:15:28.188 20:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:28.447 20:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:28.447 "subsystems": [ 00:15:28.447 { 00:15:28.447 "subsystem": "keyring", 00:15:28.447 "config": [] 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "subsystem": "iobuf", 00:15:28.447 "config": [ 00:15:28.447 { 00:15:28.447 "method": "iobuf_set_options", 00:15:28.447 "params": { 00:15:28.447 "large_bufsize": 
135168, 00:15:28.447 "large_pool_count": 1024, 00:15:28.447 "small_bufsize": 8192, 00:15:28.447 "small_pool_count": 8192 00:15:28.447 } 00:15:28.447 } 00:15:28.447 ] 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "subsystem": "sock", 00:15:28.447 "config": [ 00:15:28.447 { 00:15:28.447 "method": "sock_set_default_impl", 00:15:28.447 "params": { 00:15:28.447 "impl_name": "posix" 00:15:28.447 } 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "method": "sock_impl_set_options", 00:15:28.447 "params": { 00:15:28.447 "enable_ktls": false, 00:15:28.447 "enable_placement_id": 0, 00:15:28.447 "enable_quickack": false, 00:15:28.447 "enable_recv_pipe": true, 00:15:28.447 "enable_zerocopy_send_client": false, 00:15:28.447 "enable_zerocopy_send_server": true, 00:15:28.447 "impl_name": "ssl", 00:15:28.447 "recv_buf_size": 4096, 00:15:28.447 "send_buf_size": 4096, 00:15:28.447 "tls_version": 0, 00:15:28.447 "zerocopy_threshold": 0 00:15:28.447 } 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "method": "sock_impl_set_options", 00:15:28.447 "params": { 00:15:28.447 "enable_ktls": false, 00:15:28.447 "enable_placement_id": 0, 00:15:28.447 "enable_quickack": false, 00:15:28.447 "enable_recv_pipe": true, 00:15:28.447 "enable_zerocopy_send_client": false, 00:15:28.447 "enable_zerocopy_send_server": true, 00:15:28.447 "impl_name": "posix", 00:15:28.447 "recv_buf_size": 2097152, 00:15:28.447 "send_buf_size": 2097152, 00:15:28.447 "tls_version": 0, 00:15:28.447 "zerocopy_threshold": 0 00:15:28.447 } 00:15:28.447 } 00:15:28.447 ] 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "subsystem": "vmd", 00:15:28.447 "config": [] 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "subsystem": "accel", 00:15:28.447 "config": [ 00:15:28.447 { 00:15:28.447 "method": "accel_set_options", 00:15:28.447 "params": { 00:15:28.447 "buf_count": 2048, 00:15:28.447 "large_cache_size": 16, 00:15:28.447 "sequence_count": 2048, 00:15:28.447 "small_cache_size": 128, 00:15:28.447 "task_count": 2048 00:15:28.447 } 00:15:28.447 } 00:15:28.447 ] 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "subsystem": "bdev", 00:15:28.447 "config": [ 00:15:28.447 { 00:15:28.447 "method": "bdev_set_options", 00:15:28.447 "params": { 00:15:28.447 "bdev_auto_examine": true, 00:15:28.447 "bdev_io_cache_size": 256, 00:15:28.447 "bdev_io_pool_size": 65535, 00:15:28.447 "iobuf_large_cache_size": 16, 00:15:28.447 "iobuf_small_cache_size": 128 00:15:28.447 } 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "method": "bdev_raid_set_options", 00:15:28.447 "params": { 00:15:28.447 "process_window_size_kb": 1024 00:15:28.447 } 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "method": "bdev_iscsi_set_options", 00:15:28.447 "params": { 00:15:28.447 "timeout_sec": 30 00:15:28.447 } 00:15:28.447 }, 00:15:28.447 { 00:15:28.447 "method": "bdev_nvme_set_options", 00:15:28.447 "params": { 00:15:28.447 "action_on_timeout": "none", 00:15:28.447 "allow_accel_sequence": false, 00:15:28.447 "arbitration_burst": 0, 00:15:28.447 "bdev_retry_count": 3, 00:15:28.447 "ctrlr_loss_timeout_sec": 0, 00:15:28.447 "delay_cmd_submit": true, 00:15:28.447 "dhchap_dhgroups": [ 00:15:28.447 "null", 00:15:28.447 "ffdhe2048", 00:15:28.447 "ffdhe3072", 00:15:28.447 "ffdhe4096", 00:15:28.447 "ffdhe6144", 00:15:28.447 "ffdhe8192" 00:15:28.447 ], 00:15:28.447 "dhchap_digests": [ 00:15:28.447 "sha256", 00:15:28.447 "sha384", 00:15:28.447 "sha512" 00:15:28.447 ], 00:15:28.447 "disable_auto_failback": false, 00:15:28.447 "fast_io_fail_timeout_sec": 0, 00:15:28.447 "generate_uuids": false, 00:15:28.447 "high_priority_weight": 0, 
00:15:28.448 "io_path_stat": false, 00:15:28.448 "io_queue_requests": 0, 00:15:28.448 "keep_alive_timeout_ms": 10000, 00:15:28.448 "low_priority_weight": 0, 00:15:28.448 "medium_priority_weight": 0, 00:15:28.448 "nvme_adminq_poll_period_us": 10000, 00:15:28.448 "nvme_error_stat": false, 00:15:28.448 "nvme_ioq_poll_period_us": 0, 00:15:28.448 "rdma_cm_event_timeout_ms": 0, 00:15:28.448 "rdma_max_cq_size": 0, 00:15:28.448 "rdma_srq_size": 0, 00:15:28.448 "reconnect_delay_sec": 0, 00:15:28.448 "timeout_admin_us": 0, 00:15:28.448 "timeout_us": 0, 00:15:28.448 "transport_ack_timeout": 0, 00:15:28.448 "transport_retry_count": 4, 00:15:28.448 "transport_tos": 0 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "bdev_nvme_set_hotplug", 00:15:28.448 "params": { 00:15:28.448 "enable": false, 00:15:28.448 "period_us": 100000 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "bdev_malloc_create", 00:15:28.448 "params": { 00:15:28.448 "block_size": 4096, 00:15:28.448 "name": "malloc0", 00:15:28.448 "num_blocks": 8192, 00:15:28.448 "optimal_io_boundary": 0, 00:15:28.448 "physical_block_size": 4096, 00:15:28.448 "uuid": "facda85f-600e-4880-88d9-d8c946947865" 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "bdev_wait_for_examine" 00:15:28.448 } 00:15:28.448 ] 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "subsystem": "nbd", 00:15:28.448 "config": [] 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "subsystem": "scheduler", 00:15:28.448 "config": [ 00:15:28.448 { 00:15:28.448 "method": "framework_set_scheduler", 00:15:28.448 "params": { 00:15:28.448 "name": "static" 00:15:28.448 } 00:15:28.448 } 00:15:28.448 ] 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "subsystem": "nvmf", 00:15:28.448 "config": [ 00:15:28.448 { 00:15:28.448 "method": "nvmf_set_config", 00:15:28.448 "params": { 00:15:28.448 "admin_cmd_passthru": { 00:15:28.448 "identify_ctrlr": false 00:15:28.448 }, 00:15:28.448 "discovery_filter": "match_any" 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "nvmf_set_max_subsystems", 00:15:28.448 "params": { 00:15:28.448 "max_subsystems": 1024 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "nvmf_set_crdt", 00:15:28.448 "params": { 00:15:28.448 "crdt1": 0, 00:15:28.448 "crdt2": 0, 00:15:28.448 "crdt3": 0 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "nvmf_create_transport", 00:15:28.448 "params": { 00:15:28.448 "abort_timeout_sec": 1, 00:15:28.448 "ack_timeout": 0, 00:15:28.448 "buf_cache_size": 4294967295, 00:15:28.448 "c2h_success": false, 00:15:28.448 "data_wr_pool_size": 0, 00:15:28.448 "dif_insert_or_strip": false, 00:15:28.448 "in_capsule_data_size": 4096, 00:15:28.448 "io_unit_size": 131072, 00:15:28.448 "max_aq_depth": 128, 00:15:28.448 "max_io_qpairs_per_ctrlr": 127, 00:15:28.448 "max_io_size": 131072, 00:15:28.448 "max_queue_depth": 128, 00:15:28.448 "num_shared_buffers": 511, 00:15:28.448 "sock_priority": 0, 00:15:28.448 "trtype": "TCP", 00:15:28.448 "zcopy": false 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "nvmf_create_subsystem", 00:15:28.448 "params": { 00:15:28.448 "allow_any_host": false, 00:15:28.448 "ana_reporting": false, 00:15:28.448 "max_cntlid": 65519, 00:15:28.448 "max_namespaces": 10, 00:15:28.448 "min_cntlid": 1, 00:15:28.448 "model_number": "SPDK bdev Controller", 00:15:28.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.448 "serial_number": "SPDK00000000000001" 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": 
"nvmf_subsystem_add_host", 00:15:28.448 "params": { 00:15:28.448 "host": "nqn.2016-06.io.spdk:host1", 00:15:28.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.448 "psk": "/tmp/tmp.q2MvlIhZ62" 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "nvmf_subsystem_add_ns", 00:15:28.448 "params": { 00:15:28.448 "namespace": { 00:15:28.448 "bdev_name": "malloc0", 00:15:28.448 "nguid": "FACDA85F600E488088D9D8C946947865", 00:15:28.448 "no_auto_visible": false, 00:15:28.448 "nsid": 1, 00:15:28.448 "uuid": "facda85f-600e-4880-88d9-d8c946947865" 00:15:28.448 }, 00:15:28.448 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:28.448 } 00:15:28.448 }, 00:15:28.448 { 00:15:28.448 "method": "nvmf_subsystem_add_listener", 00:15:28.448 "params": { 00:15:28.448 "listen_address": { 00:15:28.448 "adrfam": "IPv4", 00:15:28.448 "traddr": "10.0.0.2", 00:15:28.448 "trsvcid": "4420", 00:15:28.448 "trtype": "TCP" 00:15:28.448 }, 00:15:28.448 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.448 "secure_channel": true 00:15:28.448 } 00:15:28.448 } 00:15:28.448 ] 00:15:28.448 } 00:15:28.448 ] 00:15:28.448 }' 00:15:28.448 20:31:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:29.014 20:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:29.014 "subsystems": [ 00:15:29.014 { 00:15:29.014 "subsystem": "keyring", 00:15:29.014 "config": [] 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "subsystem": "iobuf", 00:15:29.014 "config": [ 00:15:29.014 { 00:15:29.014 "method": "iobuf_set_options", 00:15:29.014 "params": { 00:15:29.014 "large_bufsize": 135168, 00:15:29.014 "large_pool_count": 1024, 00:15:29.014 "small_bufsize": 8192, 00:15:29.014 "small_pool_count": 8192 00:15:29.014 } 00:15:29.014 } 00:15:29.014 ] 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "subsystem": "sock", 00:15:29.014 "config": [ 00:15:29.014 { 00:15:29.014 "method": "sock_set_default_impl", 00:15:29.014 "params": { 00:15:29.014 "impl_name": "posix" 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "sock_impl_set_options", 00:15:29.014 "params": { 00:15:29.014 "enable_ktls": false, 00:15:29.014 "enable_placement_id": 0, 00:15:29.014 "enable_quickack": false, 00:15:29.014 "enable_recv_pipe": true, 00:15:29.014 "enable_zerocopy_send_client": false, 00:15:29.014 "enable_zerocopy_send_server": true, 00:15:29.014 "impl_name": "ssl", 00:15:29.014 "recv_buf_size": 4096, 00:15:29.014 "send_buf_size": 4096, 00:15:29.014 "tls_version": 0, 00:15:29.014 "zerocopy_threshold": 0 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "sock_impl_set_options", 00:15:29.014 "params": { 00:15:29.014 "enable_ktls": false, 00:15:29.014 "enable_placement_id": 0, 00:15:29.014 "enable_quickack": false, 00:15:29.014 "enable_recv_pipe": true, 00:15:29.014 "enable_zerocopy_send_client": false, 00:15:29.014 "enable_zerocopy_send_server": true, 00:15:29.014 "impl_name": "posix", 00:15:29.014 "recv_buf_size": 2097152, 00:15:29.014 "send_buf_size": 2097152, 00:15:29.014 "tls_version": 0, 00:15:29.014 "zerocopy_threshold": 0 00:15:29.014 } 00:15:29.014 } 00:15:29.014 ] 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "subsystem": "vmd", 00:15:29.014 "config": [] 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "subsystem": "accel", 00:15:29.014 "config": [ 00:15:29.014 { 00:15:29.014 "method": "accel_set_options", 00:15:29.014 "params": { 00:15:29.014 "buf_count": 2048, 00:15:29.014 "large_cache_size": 16, 00:15:29.014 "sequence_count": 2048, 00:15:29.014 
"small_cache_size": 128, 00:15:29.014 "task_count": 2048 00:15:29.014 } 00:15:29.014 } 00:15:29.014 ] 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "subsystem": "bdev", 00:15:29.014 "config": [ 00:15:29.014 { 00:15:29.014 "method": "bdev_set_options", 00:15:29.014 "params": { 00:15:29.014 "bdev_auto_examine": true, 00:15:29.014 "bdev_io_cache_size": 256, 00:15:29.014 "bdev_io_pool_size": 65535, 00:15:29.014 "iobuf_large_cache_size": 16, 00:15:29.014 "iobuf_small_cache_size": 128 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_raid_set_options", 00:15:29.014 "params": { 00:15:29.014 "process_window_size_kb": 1024 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_iscsi_set_options", 00:15:29.014 "params": { 00:15:29.014 "timeout_sec": 30 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_nvme_set_options", 00:15:29.014 "params": { 00:15:29.014 "action_on_timeout": "none", 00:15:29.014 "allow_accel_sequence": false, 00:15:29.014 "arbitration_burst": 0, 00:15:29.014 "bdev_retry_count": 3, 00:15:29.014 "ctrlr_loss_timeout_sec": 0, 00:15:29.014 "delay_cmd_submit": true, 00:15:29.014 "dhchap_dhgroups": [ 00:15:29.014 "null", 00:15:29.014 "ffdhe2048", 00:15:29.014 "ffdhe3072", 00:15:29.014 "ffdhe4096", 00:15:29.014 "ffdhe6144", 00:15:29.014 "ffdhe8192" 00:15:29.014 ], 00:15:29.014 "dhchap_digests": [ 00:15:29.014 "sha256", 00:15:29.014 "sha384", 00:15:29.014 "sha512" 00:15:29.014 ], 00:15:29.014 "disable_auto_failback": false, 00:15:29.014 "fast_io_fail_timeout_sec": 0, 00:15:29.014 "generate_uuids": false, 00:15:29.014 "high_priority_weight": 0, 00:15:29.014 "io_path_stat": false, 00:15:29.014 "io_queue_requests": 512, 00:15:29.014 "keep_alive_timeout_ms": 10000, 00:15:29.014 "low_priority_weight": 0, 00:15:29.014 "medium_priority_weight": 0, 00:15:29.014 "nvme_adminq_poll_period_us": 10000, 00:15:29.014 "nvme_error_stat": false, 00:15:29.014 "nvme_ioq_poll_period_us": 0, 00:15:29.014 "rdma_cm_event_timeout_ms": 0, 00:15:29.014 "rdma_max_cq_size": 0, 00:15:29.014 "rdma_srq_size": 0, 00:15:29.014 "reconnect_delay_sec": 0, 00:15:29.014 "timeout_admin_us": 0, 00:15:29.014 "timeout_us": 0, 00:15:29.014 "transport_ack_timeout": 0, 00:15:29.014 "transport_retry_count": 4, 00:15:29.014 "transport_tos": 0 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_nvme_attach_controller", 00:15:29.014 "params": { 00:15:29.014 "adrfam": "IPv4", 00:15:29.014 "ctrlr_loss_timeout_sec": 0, 00:15:29.014 "ddgst": false, 00:15:29.014 "fast_io_fail_timeout_sec": 0, 00:15:29.014 "hdgst": false, 00:15:29.014 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:29.014 "name": "TLSTEST", 00:15:29.014 "prchk_guard": false, 00:15:29.014 "prchk_reftag": false, 00:15:29.014 "psk": "/tmp/tmp.q2MvlIhZ62", 00:15:29.014 "reconnect_delay_sec": 0, 00:15:29.014 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.014 "traddr": "10.0.0.2", 00:15:29.014 "trsvcid": "4420", 00:15:29.014 "trtype": "TCP" 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_nvme_set_hotplug", 00:15:29.014 "params": { 00:15:29.014 "enable": false, 00:15:29.014 "period_us": 100000 00:15:29.014 } 00:15:29.014 }, 00:15:29.014 { 00:15:29.014 "method": "bdev_wait_for_examine" 00:15:29.014 } 00:15:29.014 ] 00:15:29.014 }, 00:15:29.015 { 00:15:29.015 "subsystem": "nbd", 00:15:29.015 "config": [] 00:15:29.015 } 00:15:29.015 ] 00:15:29.015 }' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 84559 00:15:29.015 20:31:50 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84559 ']' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84559 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84559 00:15:29.015 killing process with pid 84559 00:15:29.015 Received shutdown signal, test time was about 10.000000 seconds 00:15:29.015 00:15:29.015 Latency(us) 00:15:29.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.015 =================================================================================================================== 00:15:29.015 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84559' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84559 00:15:29.015 [2024-07-15 20:31:50.233229] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84559 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 84470 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84470 ']' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84470 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84470 00:15:29.015 killing process with pid 84470 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84470' 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84470 00:15:29.015 [2024-07-15 20:31:50.481423] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:29.015 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84470 00:15:29.273 20:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:29.273 20:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.273 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.273 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.273 20:31:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:29.273 "subsystems": [ 00:15:29.273 { 00:15:29.273 "subsystem": "keyring", 00:15:29.273 "config": [] 00:15:29.273 }, 00:15:29.273 { 00:15:29.273 "subsystem": "iobuf", 00:15:29.273 "config": [ 00:15:29.273 { 00:15:29.273 "method": 
"iobuf_set_options", 00:15:29.273 "params": { 00:15:29.273 "large_bufsize": 135168, 00:15:29.273 "large_pool_count": 1024, 00:15:29.273 "small_bufsize": 8192, 00:15:29.273 "small_pool_count": 8192 00:15:29.273 } 00:15:29.273 } 00:15:29.273 ] 00:15:29.273 }, 00:15:29.273 { 00:15:29.273 "subsystem": "sock", 00:15:29.273 "config": [ 00:15:29.273 { 00:15:29.273 "method": "sock_set_default_impl", 00:15:29.273 "params": { 00:15:29.273 "impl_name": "posix" 00:15:29.273 } 00:15:29.273 }, 00:15:29.273 { 00:15:29.273 "method": "sock_impl_set_options", 00:15:29.273 "params": { 00:15:29.273 "enable_ktls": false, 00:15:29.273 "enable_placement_id": 0, 00:15:29.273 "enable_quickack": false, 00:15:29.273 "enable_recv_pipe": true, 00:15:29.273 "enable_zerocopy_send_client": false, 00:15:29.273 "enable_zerocopy_send_server": true, 00:15:29.273 "impl_name": "ssl", 00:15:29.273 "recv_buf_size": 4096, 00:15:29.273 "send_buf_size": 4096, 00:15:29.273 "tls_version": 0, 00:15:29.273 "zerocopy_threshold": 0 00:15:29.273 } 00:15:29.273 }, 00:15:29.273 { 00:15:29.273 "method": "sock_impl_set_options", 00:15:29.273 "params": { 00:15:29.273 "enable_ktls": false, 00:15:29.273 "enable_placement_id": 0, 00:15:29.273 "enable_quickack": false, 00:15:29.273 "enable_recv_pipe": true, 00:15:29.273 "enable_zerocopy_send_client": false, 00:15:29.273 "enable_zerocopy_send_server": true, 00:15:29.273 "impl_name": "posix", 00:15:29.273 "recv_buf_size": 2097152, 00:15:29.273 "send_buf_size": 2097152, 00:15:29.273 "tls_version": 0, 00:15:29.273 "zerocopy_threshold": 0 00:15:29.273 } 00:15:29.273 } 00:15:29.274 ] 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "subsystem": "vmd", 00:15:29.274 "config": [] 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "subsystem": "accel", 00:15:29.274 "config": [ 00:15:29.274 { 00:15:29.274 "method": "accel_set_options", 00:15:29.274 "params": { 00:15:29.274 "buf_count": 2048, 00:15:29.274 "large_cache_size": 16, 00:15:29.274 "sequence_count": 2048, 00:15:29.274 "small_cache_size": 128, 00:15:29.274 "task_count": 2048 00:15:29.274 } 00:15:29.274 } 00:15:29.274 ] 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "subsystem": "bdev", 00:15:29.274 "config": [ 00:15:29.274 { 00:15:29.274 "method": "bdev_set_options", 00:15:29.274 "params": { 00:15:29.274 "bdev_auto_examine": true, 00:15:29.274 "bdev_io_cache_size": 256, 00:15:29.274 "bdev_io_pool_size": 65535, 00:15:29.274 "iobuf_large_cache_size": 16, 00:15:29.274 "iobuf_small_cache_size": 128 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "bdev_raid_set_options", 00:15:29.274 "params": { 00:15:29.274 "process_window_size_kb": 1024 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "bdev_iscsi_set_options", 00:15:29.274 "params": { 00:15:29.274 "timeout_sec": 30 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "bdev_nvme_set_options", 00:15:29.274 "params": { 00:15:29.274 "action_on_timeout": "none", 00:15:29.274 "allow_accel_sequence": false, 00:15:29.274 "arbitration_burst": 0, 00:15:29.274 "bdev_retry_count": 3, 00:15:29.274 "ctrlr_loss_timeout_sec": 0, 00:15:29.274 "delay_cmd_submit": true, 00:15:29.274 "dhchap_dhgroups": [ 00:15:29.274 "null", 00:15:29.274 "ffdhe2048", 00:15:29.274 "ffdhe3072", 00:15:29.274 "ffdhe4096", 00:15:29.274 "ffdhe6144", 00:15:29.274 "ffdhe8192" 00:15:29.274 ], 00:15:29.274 "dhchap_digests": [ 00:15:29.274 "sha256", 00:15:29.274 "sha384", 00:15:29.274 "sha512" 00:15:29.274 ], 00:15:29.274 "disable_auto_failback": false, 00:15:29.274 "fast_io_fail_timeout_sec": 0, 00:15:29.274 
"generate_uuids": false, 00:15:29.274 "high_priority_weight": 0, 00:15:29.274 "io_path_stat": false, 00:15:29.274 "io_queue_requests": 0, 00:15:29.274 "keep_alive_timeout_ms": 10000, 00:15:29.274 "low_priority_weight": 0, 00:15:29.274 "medium_priority_weight": 0, 00:15:29.274 "nvme_adminq_poll_period_us": 10000, 00:15:29.274 "nvme_error_stat": false, 00:15:29.274 "nvme_ioq_poll_period_us": 0, 00:15:29.274 "rdma_cm_event_timeout_ms": 0, 00:15:29.274 "rdma_max_cq_size": 0, 00:15:29.274 "rdma_srq_size": 0, 00:15:29.274 "reconnect_delay_sec": 0, 00:15:29.274 "timeout_admin_us": 0, 00:15:29.274 "timeout_us": 0, 00:15:29.274 "transport_ack_timeout": 0, 00:15:29.274 "transport_retry_count": 4, 00:15:29.274 "transport_tos": 0 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "bdev_nvme_set_hotplug", 00:15:29.274 "params": { 00:15:29.274 "enable": false, 00:15:29.274 "period_us": 100000 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "bdev_malloc_create", 00:15:29.274 "params": { 00:15:29.274 "block_size": 4096, 00:15:29.274 "name": "malloc0", 00:15:29.274 "num_blocks": 8192, 00:15:29.274 "optimal_io_boundary": 0, 00:15:29.274 "physical_block_size": 4096, 00:15:29.274 "uuid": "facda85f-600e-4880-88d9-d8c946947865" 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "bdev_wait_for_examine" 00:15:29.274 } 00:15:29.274 ] 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "subsystem": "nbd", 00:15:29.274 "config": [] 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "subsystem": "scheduler", 00:15:29.274 "config": [ 00:15:29.274 { 00:15:29.274 "method": "framework_set_scheduler", 00:15:29.274 "params": { 00:15:29.274 "name": "static" 00:15:29.274 } 00:15:29.274 } 00:15:29.274 ] 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "subsystem": "nvmf", 00:15:29.274 "config": [ 00:15:29.274 { 00:15:29.274 "method": "nvmf_set_config", 00:15:29.274 "params": { 00:15:29.274 "admin_cmd_passthru": { 00:15:29.274 "identify_ctrlr": false 00:15:29.274 }, 00:15:29.274 "discovery_filter": "match_any" 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_set_max_subsystems", 00:15:29.274 "params": { 00:15:29.274 "max_subsystems": 1024 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_set_crdt", 00:15:29.274 "params": { 00:15:29.274 "crdt1": 0, 00:15:29.274 "crdt2": 0, 00:15:29.274 "crdt3": 0 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_create_transport", 00:15:29.274 "params": { 00:15:29.274 "abort_timeout_sec": 1, 00:15:29.274 "ack_timeout": 0, 00:15:29.274 "buf_cache_size": 4294967295, 00:15:29.274 "c2h_success": false, 00:15:29.274 "data_wr_pool_size": 0, 00:15:29.274 "dif_insert_or_strip": false, 00:15:29.274 "in_capsule_data_size": 4096, 00:15:29.274 "io_unit_size": 131072, 00:15:29.274 "max_aq_depth": 128, 00:15:29.274 "max_io_qpairs_per_ctrlr": 127, 00:15:29.274 "max_io_size": 131072, 00:15:29.274 "max_queue_depth": 128, 00:15:29.274 "num_shared_buffers": 511, 00:15:29.274 "sock_priority": 0, 00:15:29.274 "trtype": "TCP", 00:15:29.274 "zcopy": false 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_create_subsystem", 00:15:29.274 "params": { 00:15:29.274 "allow_any_host": false, 00:15:29.274 "ana_reporting": false, 00:15:29.274 "max_cntlid": 65519, 00:15:29.274 "max_namespaces": 10, 00:15:29.274 "min_cntlid": 1, 00:15:29.274 "model_number": "SPDK bdev Controller", 00:15:29.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.274 "serial_number": "SPDK00000000000001" 00:15:29.274 
} 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_subsystem_add_host", 00:15:29.274 "params": { 00:15:29.274 "host": "nqn.2016-06.io.spdk:host1", 00:15:29.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.274 "psk": "/tmp/tmp.q2MvlIhZ62" 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_subsystem_add_ns", 00:15:29.274 "params": { 00:15:29.274 "namespace": { 00:15:29.274 "bdev_name": "malloc0", 00:15:29.274 "nguid": "FACDA85F600E488088D9D8C946947865", 00:15:29.274 "no_auto_visible": false, 00:15:29.274 "nsid": 1, 00:15:29.274 "uuid": "facda85f-600e-4880-88d9-d8c946947865" 00:15:29.274 }, 00:15:29.274 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:29.274 } 00:15:29.274 }, 00:15:29.274 { 00:15:29.274 "method": "nvmf_subsystem_add_listener", 00:15:29.274 "params": { 00:15:29.274 "listen_address": { 00:15:29.274 "adrfam": "IPv4", 00:15:29.274 "traddr": "10.0.0.2", 00:15:29.274 "trsvcid": "4420", 00:15:29.274 "trtype": "TCP" 00:15:29.274 }, 00:15:29.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.274 "secure_channel": true 00:15:29.274 } 00:15:29.274 } 00:15:29.274 ] 00:15:29.274 } 00:15:29.274 ] 00:15:29.274 }' 00:15:29.274 20:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84643 00:15:29.274 20:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:29.274 20:31:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84643 00:15:29.274 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84643 ']' 00:15:29.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.274 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.275 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.275 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.275 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.275 20:31:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.275 [2024-07-15 20:31:50.714968] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:29.275 [2024-07-15 20:31:50.715077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.532 [2024-07-15 20:31:50.848549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.532 [2024-07-15 20:31:50.918453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.533 [2024-07-15 20:31:50.918513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.533 [2024-07-15 20:31:50.918525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.533 [2024-07-15 20:31:50.918533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.533 [2024-07-15 20:31:50.918540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
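The nvmf_tgt instance with pid 84643 starting here is not configured RPC by RPC; the script replays the JSON shown being echoed above (the configuration captured earlier with save_config), feeding it in through /dev/fd/62. A minimal sketch of that save-and-replay pattern, with a hypothetical /tmp/tgt.json standing in for the process-substitution pipe the script actually uses:

  # Capture the live target's configuration...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/tgt.json
  # ...and start a fresh target from it (core mask and flags as used in the trace)
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /tmp/tgt.json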
00:15:29.533 [2024-07-15 20:31:50.918646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.790 [2024-07-15 20:31:51.111248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.790 [2024-07-15 20:31:51.127212] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:29.790 [2024-07-15 20:31:51.143174] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.790 [2024-07-15 20:31:51.143416] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84687 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84687 /var/tmp/bdevperf.sock 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84687 ']' 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:30.356 20:31:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:30.356 "subsystems": [ 00:15:30.356 { 00:15:30.356 "subsystem": "keyring", 00:15:30.356 "config": [] 00:15:30.356 }, 00:15:30.356 { 00:15:30.356 "subsystem": "iobuf", 00:15:30.356 "config": [ 00:15:30.356 { 00:15:30.356 "method": "iobuf_set_options", 00:15:30.356 "params": { 00:15:30.356 "large_bufsize": 135168, 00:15:30.356 "large_pool_count": 1024, 00:15:30.356 "small_bufsize": 8192, 00:15:30.356 "small_pool_count": 8192 00:15:30.356 } 00:15:30.356 } 00:15:30.356 ] 00:15:30.356 }, 00:15:30.357 { 00:15:30.357 "subsystem": "sock", 00:15:30.357 "config": [ 00:15:30.357 { 00:15:30.357 "method": "sock_set_default_impl", 00:15:30.357 "params": { 00:15:30.357 "impl_name": "posix" 00:15:30.357 } 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "method": "sock_impl_set_options", 00:15:30.357 "params": { 00:15:30.357 "enable_ktls": false, 00:15:30.357 "enable_placement_id": 0, 00:15:30.357 "enable_quickack": false, 00:15:30.357 "enable_recv_pipe": true, 00:15:30.357 "enable_zerocopy_send_client": false, 00:15:30.357 "enable_zerocopy_send_server": true, 00:15:30.357 "impl_name": "ssl", 00:15:30.357 "recv_buf_size": 4096, 00:15:30.357 "send_buf_size": 4096, 00:15:30.357 "tls_version": 0, 00:15:30.357 "zerocopy_threshold": 0 00:15:30.357 } 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "method": "sock_impl_set_options", 00:15:30.357 "params": { 00:15:30.357 "enable_ktls": false, 00:15:30.357 "enable_placement_id": 0, 00:15:30.357 
"enable_quickack": false, 00:15:30.357 "enable_recv_pipe": true, 00:15:30.357 "enable_zerocopy_send_client": false, 00:15:30.357 "enable_zerocopy_send_server": true, 00:15:30.357 "impl_name": "posix", 00:15:30.357 "recv_buf_size": 2097152, 00:15:30.357 "send_buf_size": 2097152, 00:15:30.357 "tls_version": 0, 00:15:30.357 "zerocopy_threshold": 0 00:15:30.357 } 00:15:30.357 } 00:15:30.357 ] 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "subsystem": "vmd", 00:15:30.357 "config": [] 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "subsystem": "accel", 00:15:30.357 "config": [ 00:15:30.357 { 00:15:30.357 "method": "accel_set_options", 00:15:30.357 "params": { 00:15:30.357 "buf_count": 2048, 00:15:30.357 "large_cache_size": 16, 00:15:30.357 "sequence_count": 2048, 00:15:30.357 "small_cache_size": 128, 00:15:30.357 "task_count": 2048 00:15:30.357 } 00:15:30.357 } 00:15:30.357 ] 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "subsystem": "bdev", 00:15:30.357 "config": [ 00:15:30.357 { 00:15:30.357 "method": "bdev_set_options", 00:15:30.357 "params": { 00:15:30.357 "bdev_auto_examine": true, 00:15:30.357 "bdev_io_cache_size": 256, 00:15:30.357 "bdev_io_pool_size": 65535, 00:15:30.357 "iobuf_large_cache_size": 16, 00:15:30.357 "iobuf_small_cache_size": 128 00:15:30.357 } 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "method": "bdev_raid_set_options", 00:15:30.357 "params": { 00:15:30.357 "process_window_size_kb": 1024 00:15:30.357 } 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "method": "bdev_iscsi_set_options", 00:15:30.357 "params": { 00:15:30.357 "timeout_sec": 30 00:15:30.357 } 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "method": "bdev_nvme_set_options", 00:15:30.357 "params": { 00:15:30.357 "action_on_timeout": "none", 00:15:30.357 "allow_accel_sequence": false, 00:15:30.357 "arbitration_burst": 0, 00:15:30.357 "bdev_retry_count": 3, 00:15:30.357 "ctrlr_loss_timeout_sec": 0, 00:15:30.357 "delay_cmd_submit": true, 00:15:30.357 "dhchap_dhgroups": [ 00:15:30.357 "null", 00:15:30.357 "ffdhe2048", 00:15:30.357 "ffdhe3072", 00:15:30.357 "ffdhe4096", 00:15:30.357 "ffdhe6144", 00:15:30.357 "ffdhe8192" 00:15:30.357 ], 00:15:30.357 "dhchap_digests": [ 00:15:30.357 "sha256", 00:15:30.357 "sha384", 00:15:30.357 "sha512" 00:15:30.357 ], 00:15:30.357 "disable_auto_failback": false, 00:15:30.357 "fast_io_fail_timeout_sec": 0, 00:15:30.357 "generate_uuids": false, 00:15:30.357 "high_priority_weight": 0, 00:15:30.357 "io_path_stat": false, 00:15:30.357 "io_queue_requests": 512, 00:15:30.357 "keep_alive_timeout_ms": 10000, 00:15:30.357 "low_priority_weight": 0, 00:15:30.357 "medium_priority_weight": 0, 00:15:30.357 "nvme_adminq_poll_period_us": 10000, 00:15:30.357 "nvme_error_stat": false, 00:15:30.357 "nvme_ioq_poll_period_us": 0, 00:15:30.357 "rdma_cm_event_timeout_ms": 0, 00:15:30.357 "rdma_max_cq_size": 0, 00:15:30.357 "rdma_srq_size": 0, 00:15:30.357 "reconnect_delay_sec": 0, 00:15:30.357 "timeout_admin_us": 0, 00:15:30.357 "timeout_us": 0, 00:15:30.357 "transport_ack_timeout": 0, 00:15:30.357 "transport_retry_count": 4, 00:15:30.357 "transport_tos": 0 00:15:30.357 } 00:15:30.357 }, 00:15:30.357 { 00:15:30.357 "method": "bdev_nvme_attach_controller", 00:15:30.357 "params": { 00:15:30.357 "adrfam": "IPv4", 00:15:30.357 "ctrlr_loss_timeout_sec": 0, 00:15:30.357 "ddgst": false, 00:15:30.357 "fast_io_fail_timeout_sec": 0, 00:15:30.357 "hdgst": false, 00:15:30.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.358 "name": "TLSTEST", 00:15:30.358 "prchk_guard": false, 00:15:30.358 "prchk_reftag": false, 00:15:30.358 
"psk": "/tmp/tmp.q2MvlIhZ62", 00:15:30.358 "reconnect_delay_sec": 0, 00:15:30.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.358 "traddr": "10.0.0.2", 00:15:30.358 "trsvcid": "4420", 00:15:30.358 "trtype": "TCP" 00:15:30.358 } 00:15:30.358 }, 00:15:30.358 { 00:15:30.358 "method": "bdev_nvme_set_hotplug", 00:15:30.358 "params": { 00:15:30.358 "enable": false, 00:15:30.358 "period_us": 100000 00:15:30.358 } 00:15:30.358 }, 00:15:30.358 { 00:15:30.358 "method": "bdev_wait_for_examine" 00:15:30.358 } 00:15:30.358 ] 00:15:30.358 }, 00:15:30.358 { 00:15:30.358 "subsystem": "nbd", 00:15:30.358 "config": [] 00:15:30.358 } 00:15:30.358 ] 00:15:30.358 }' 00:15:30.358 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:30.358 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.358 20:31:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.358 [2024-07-15 20:31:51.793687] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:30.358 [2024-07-15 20:31:51.793812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84687 ] 00:15:30.617 [2024-07-15 20:31:51.934619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.617 [2024-07-15 20:31:51.993998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.876 [2024-07-15 20:31:52.121003] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:30.876 [2024-07-15 20:31:52.121168] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:31.443 20:31:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.443 20:31:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:31.443 20:31:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:31.443 Running I/O for 10 seconds... 
00:15:41.415 00:15:41.415 Latency(us) 00:15:41.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.415 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:41.415 Verification LBA range: start 0x0 length 0x2000 00:15:41.415 TLSTESTn1 : 10.01 3533.70 13.80 0.00 0.00 36170.32 3902.37 43611.23 00:15:41.415 =================================================================================================================== 00:15:41.415 Total : 3533.70 13.80 0.00 0.00 36170.32 3902.37 43611.23 00:15:41.415 0 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84687 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84687 ']' 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84687 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84687 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:41.673 killing process with pid 84687 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84687' 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84687 00:15:41.673 20:32:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84687 00:15:41.673 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.673 00:15:41.673 Latency(us) 00:15:41.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.673 =================================================================================================================== 00:15:41.673 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.673 [2024-07-15 20:32:02.954449] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 84643 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84643 ']' 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84643 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84643 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:41.673 killing process with pid 84643 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84643' 00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84643 00:15:41.673 [2024-07-15 20:32:03.164517] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 
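Stripping away the timestamps and notices, the path-based PSK flow exercised in this part of the test comes down to the RPC sequence below, as issued earlier in the trace against the first target/bdevperf pair (the later pair reaches the same state by replaying the saved JSON config). Here rpc.py and bdevperf.py abbreviate the full /home/vagrant/spdk_repo/spdk paths shown in the trace, and the target is assumed to be up on its default RPC socket:

  # Target side: TCP transport, subsystem with a TLS (-k) listener, malloc namespace, allowed host keyed by a PSK file
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62
  # Initiator side: attach a controller over TLS with the same PSK file on the bdevperf RPC socket, then run the timed test
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The --psk <file> form is the one flagged as deprecated in the warnings above; the keyring-based variant later in the run replaces it on the initiator side.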
00:15:41.673 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84643 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84829 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84829 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84829 ']' 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.930 20:32:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:41.930 [2024-07-15 20:32:03.422333] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:41.930 [2024-07-15 20:32:03.422464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.186 [2024-07-15 20:32:03.563703] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.186 [2024-07-15 20:32:03.622934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.186 [2024-07-15 20:32:03.622993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.186 [2024-07-15 20:32:03.623003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.186 [2024-07-15 20:32:03.623014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.186 [2024-07-15 20:32:03.623021] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:42.186 [2024-07-15 20:32:03.623056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.q2MvlIhZ62 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.q2MvlIhZ62 00:15:43.118 20:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:43.375 [2024-07-15 20:32:04.675866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.375 20:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:43.633 20:32:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:43.953 [2024-07-15 20:32:05.212015] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:43.953 [2024-07-15 20:32:05.212265] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.953 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:44.211 malloc0 00:15:44.211 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:44.469 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.q2MvlIhZ62 00:15:44.469 [2024-07-15 20:32:05.954732] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:44.726 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:44.726 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84932 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84932 /var/tmp/bdevperf.sock 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84932 ']' 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:44.727 20:32:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:44.727 [2024-07-15 20:32:06.019857] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:44.727 [2024-07-15 20:32:06.019964] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84932 ] 00:15:44.727 [2024-07-15 20:32:06.154847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.727 [2024-07-15 20:32:06.223320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.984 20:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:44.984 20:32:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:44.984 20:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q2MvlIhZ62 00:15:45.242 20:32:06 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:45.500 [2024-07-15 20:32:06.923066] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:45.500 nvme0n1 00:15:45.758 20:32:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:45.758 Running I/O for 1 seconds... 
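This bdevperf pass (pid 84932) switches the initiator from the deprecated --psk <file> argument to the keyring: the key file is registered once with keyring_file_add_key and the controller then references it by name. Condensed from the trace (rpc.py and bdevperf.py again abbreviate the full repo paths):

  # Register the PSK file under the name key0 on the bdevperf RPC socket
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q2MvlIhZ62
  # Attach the TLS controller by key name rather than by file path
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Run the short verification pass
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Note that the target side in this pass is still configured with the file-based nvmf_subsystem_add_host --psk form, as shown above; only the initiator switches to the named key.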
00:15:46.696 00:15:46.696 Latency(us) 00:15:46.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.696 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:46.696 Verification LBA range: start 0x0 length 0x2000 00:15:46.696 nvme0n1 : 1.03 3735.41 14.59 0.00 0.00 33875.11 7238.75 20494.89 00:15:46.696 =================================================================================================================== 00:15:46.696 Total : 3735.41 14.59 0.00 0.00 33875.11 7238.75 20494.89 00:15:46.696 0 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 84932 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84932 ']' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84932 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84932 00:15:46.954 killing process with pid 84932 00:15:46.954 Received shutdown signal, test time was about 1.000000 seconds 00:15:46.954 00:15:46.954 Latency(us) 00:15:46.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.954 =================================================================================================================== 00:15:46.954 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84932' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84932 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84932 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84829 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84829 ']' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84829 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84829 00:15:46.954 killing process with pid 84829 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84829' 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84829 00:15:46.954 [2024-07-15 20:32:08.415606] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:46.954 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84829 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84998 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84998 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 84998 ']' 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.211 20:32:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:47.211 [2024-07-15 20:32:08.659784] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:47.211 [2024-07-15 20:32:08.659932] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.468 [2024-07-15 20:32:08.808105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.468 [2024-07-15 20:32:08.887829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.468 [2024-07-15 20:32:08.887903] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.468 [2024-07-15 20:32:08.887916] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.468 [2024-07-15 20:32:08.887925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.468 [2024-07-15 20:32:08.887932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:47.468 [2024-07-15 20:32:08.887963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.399 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.400 [2024-07-15 20:32:09.751051] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.400 malloc0 00:15:48.400 [2024-07-15 20:32:09.779133] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:48.400 [2024-07-15 20:32:09.779392] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=85048 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 85048 /var/tmp/bdevperf.sock 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85048 ']' 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.400 20:32:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:48.400 [2024-07-15 20:32:09.875669] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:48.400 [2024-07-15 20:32:09.875799] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85048 ] 00:15:48.657 [2024-07-15 20:32:10.015905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.657 [2024-07-15 20:32:10.100610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.600 20:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.600 20:32:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:49.600 20:32:10 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.q2MvlIhZ62 00:15:49.885 20:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:50.143 [2024-07-15 20:32:11.527769] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:50.143 nvme0n1 00:15:50.143 20:32:11 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:50.400 Running I/O for 1 seconds... 00:15:51.335 00:15:51.335 Latency(us) 00:15:51.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.335 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:51.335 Verification LBA range: start 0x0 length 0x2000 00:15:51.335 nvme0n1 : 1.03 3514.54 13.73 0.00 0.00 35976.23 8638.84 39083.29 00:15:51.335 =================================================================================================================== 00:15:51.335 Total : 3514.54 13.73 0.00 0.00 35976.23 8638.84 39083.29 00:15:51.335 0 00:15:51.335 20:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:15:51.335 20:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.335 20:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:51.593 20:32:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.593 20:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:15:51.593 "subsystems": [ 00:15:51.593 { 00:15:51.593 "subsystem": "keyring", 00:15:51.593 "config": [ 00:15:51.593 { 00:15:51.593 "method": "keyring_file_add_key", 00:15:51.593 "params": { 00:15:51.593 "name": "key0", 00:15:51.593 "path": "/tmp/tmp.q2MvlIhZ62" 00:15:51.593 } 00:15:51.593 } 00:15:51.593 ] 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "subsystem": "iobuf", 00:15:51.593 "config": [ 00:15:51.593 { 00:15:51.593 "method": "iobuf_set_options", 00:15:51.593 "params": { 00:15:51.593 "large_bufsize": 135168, 00:15:51.593 "large_pool_count": 1024, 00:15:51.593 "small_bufsize": 8192, 00:15:51.593 "small_pool_count": 8192 00:15:51.593 } 00:15:51.593 } 00:15:51.593 ] 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "subsystem": "sock", 00:15:51.593 "config": [ 00:15:51.593 { 00:15:51.593 "method": "sock_set_default_impl", 00:15:51.593 "params": { 00:15:51.593 "impl_name": "posix" 00:15:51.593 } 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "method": "sock_impl_set_options", 00:15:51.593 "params": { 00:15:51.593 
"enable_ktls": false, 00:15:51.593 "enable_placement_id": 0, 00:15:51.593 "enable_quickack": false, 00:15:51.593 "enable_recv_pipe": true, 00:15:51.593 "enable_zerocopy_send_client": false, 00:15:51.593 "enable_zerocopy_send_server": true, 00:15:51.593 "impl_name": "ssl", 00:15:51.593 "recv_buf_size": 4096, 00:15:51.593 "send_buf_size": 4096, 00:15:51.593 "tls_version": 0, 00:15:51.593 "zerocopy_threshold": 0 00:15:51.593 } 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "method": "sock_impl_set_options", 00:15:51.593 "params": { 00:15:51.593 "enable_ktls": false, 00:15:51.593 "enable_placement_id": 0, 00:15:51.593 "enable_quickack": false, 00:15:51.593 "enable_recv_pipe": true, 00:15:51.593 "enable_zerocopy_send_client": false, 00:15:51.593 "enable_zerocopy_send_server": true, 00:15:51.593 "impl_name": "posix", 00:15:51.593 "recv_buf_size": 2097152, 00:15:51.593 "send_buf_size": 2097152, 00:15:51.593 "tls_version": 0, 00:15:51.593 "zerocopy_threshold": 0 00:15:51.593 } 00:15:51.593 } 00:15:51.593 ] 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "subsystem": "vmd", 00:15:51.593 "config": [] 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "subsystem": "accel", 00:15:51.593 "config": [ 00:15:51.593 { 00:15:51.593 "method": "accel_set_options", 00:15:51.593 "params": { 00:15:51.593 "buf_count": 2048, 00:15:51.593 "large_cache_size": 16, 00:15:51.593 "sequence_count": 2048, 00:15:51.593 "small_cache_size": 128, 00:15:51.593 "task_count": 2048 00:15:51.593 } 00:15:51.593 } 00:15:51.593 ] 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "subsystem": "bdev", 00:15:51.593 "config": [ 00:15:51.593 { 00:15:51.593 "method": "bdev_set_options", 00:15:51.593 "params": { 00:15:51.593 "bdev_auto_examine": true, 00:15:51.593 "bdev_io_cache_size": 256, 00:15:51.593 "bdev_io_pool_size": 65535, 00:15:51.593 "iobuf_large_cache_size": 16, 00:15:51.593 "iobuf_small_cache_size": 128 00:15:51.593 } 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "method": "bdev_raid_set_options", 00:15:51.593 "params": { 00:15:51.593 "process_window_size_kb": 1024 00:15:51.593 } 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "method": "bdev_iscsi_set_options", 00:15:51.593 "params": { 00:15:51.593 "timeout_sec": 30 00:15:51.593 } 00:15:51.593 }, 00:15:51.593 { 00:15:51.593 "method": "bdev_nvme_set_options", 00:15:51.593 "params": { 00:15:51.593 "action_on_timeout": "none", 00:15:51.593 "allow_accel_sequence": false, 00:15:51.593 "arbitration_burst": 0, 00:15:51.593 "bdev_retry_count": 3, 00:15:51.593 "ctrlr_loss_timeout_sec": 0, 00:15:51.593 "delay_cmd_submit": true, 00:15:51.593 "dhchap_dhgroups": [ 00:15:51.593 "null", 00:15:51.593 "ffdhe2048", 00:15:51.593 "ffdhe3072", 00:15:51.593 "ffdhe4096", 00:15:51.593 "ffdhe6144", 00:15:51.593 "ffdhe8192" 00:15:51.593 ], 00:15:51.593 "dhchap_digests": [ 00:15:51.593 "sha256", 00:15:51.593 "sha384", 00:15:51.593 "sha512" 00:15:51.593 ], 00:15:51.593 "disable_auto_failback": false, 00:15:51.593 "fast_io_fail_timeout_sec": 0, 00:15:51.593 "generate_uuids": false, 00:15:51.593 "high_priority_weight": 0, 00:15:51.593 "io_path_stat": false, 00:15:51.593 "io_queue_requests": 0, 00:15:51.593 "keep_alive_timeout_ms": 10000, 00:15:51.593 "low_priority_weight": 0, 00:15:51.593 "medium_priority_weight": 0, 00:15:51.593 "nvme_adminq_poll_period_us": 10000, 00:15:51.593 "nvme_error_stat": false, 00:15:51.593 "nvme_ioq_poll_period_us": 0, 00:15:51.594 "rdma_cm_event_timeout_ms": 0, 00:15:51.594 "rdma_max_cq_size": 0, 00:15:51.594 "rdma_srq_size": 0, 00:15:51.594 "reconnect_delay_sec": 0, 00:15:51.594 "timeout_admin_us": 0, 
00:15:51.594 "timeout_us": 0, 00:15:51.594 "transport_ack_timeout": 0, 00:15:51.594 "transport_retry_count": 4, 00:15:51.594 "transport_tos": 0 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "bdev_nvme_set_hotplug", 00:15:51.594 "params": { 00:15:51.594 "enable": false, 00:15:51.594 "period_us": 100000 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "bdev_malloc_create", 00:15:51.594 "params": { 00:15:51.594 "block_size": 4096, 00:15:51.594 "name": "malloc0", 00:15:51.594 "num_blocks": 8192, 00:15:51.594 "optimal_io_boundary": 0, 00:15:51.594 "physical_block_size": 4096, 00:15:51.594 "uuid": "c11e9e76-c59b-4cfa-9cec-5cfbc021d32d" 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "bdev_wait_for_examine" 00:15:51.594 } 00:15:51.594 ] 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "subsystem": "nbd", 00:15:51.594 "config": [] 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "subsystem": "scheduler", 00:15:51.594 "config": [ 00:15:51.594 { 00:15:51.594 "method": "framework_set_scheduler", 00:15:51.594 "params": { 00:15:51.594 "name": "static" 00:15:51.594 } 00:15:51.594 } 00:15:51.594 ] 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "subsystem": "nvmf", 00:15:51.594 "config": [ 00:15:51.594 { 00:15:51.594 "method": "nvmf_set_config", 00:15:51.594 "params": { 00:15:51.594 "admin_cmd_passthru": { 00:15:51.594 "identify_ctrlr": false 00:15:51.594 }, 00:15:51.594 "discovery_filter": "match_any" 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_set_max_subsystems", 00:15:51.594 "params": { 00:15:51.594 "max_subsystems": 1024 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_set_crdt", 00:15:51.594 "params": { 00:15:51.594 "crdt1": 0, 00:15:51.594 "crdt2": 0, 00:15:51.594 "crdt3": 0 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_create_transport", 00:15:51.594 "params": { 00:15:51.594 "abort_timeout_sec": 1, 00:15:51.594 "ack_timeout": 0, 00:15:51.594 "buf_cache_size": 4294967295, 00:15:51.594 "c2h_success": false, 00:15:51.594 "data_wr_pool_size": 0, 00:15:51.594 "dif_insert_or_strip": false, 00:15:51.594 "in_capsule_data_size": 4096, 00:15:51.594 "io_unit_size": 131072, 00:15:51.594 "max_aq_depth": 128, 00:15:51.594 "max_io_qpairs_per_ctrlr": 127, 00:15:51.594 "max_io_size": 131072, 00:15:51.594 "max_queue_depth": 128, 00:15:51.594 "num_shared_buffers": 511, 00:15:51.594 "sock_priority": 0, 00:15:51.594 "trtype": "TCP", 00:15:51.594 "zcopy": false 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_create_subsystem", 00:15:51.594 "params": { 00:15:51.594 "allow_any_host": false, 00:15:51.594 "ana_reporting": false, 00:15:51.594 "max_cntlid": 65519, 00:15:51.594 "max_namespaces": 32, 00:15:51.594 "min_cntlid": 1, 00:15:51.594 "model_number": "SPDK bdev Controller", 00:15:51.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.594 "serial_number": "00000000000000000000" 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_subsystem_add_host", 00:15:51.594 "params": { 00:15:51.594 "host": "nqn.2016-06.io.spdk:host1", 00:15:51.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.594 "psk": "key0" 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_subsystem_add_ns", 00:15:51.594 "params": { 00:15:51.594 "namespace": { 00:15:51.594 "bdev_name": "malloc0", 00:15:51.594 "nguid": "C11E9E76C59B4CFA9CEC5CFBC021D32D", 00:15:51.594 "no_auto_visible": false, 00:15:51.594 "nsid": 1, 00:15:51.594 "uuid": 
"c11e9e76-c59b-4cfa-9cec-5cfbc021d32d" 00:15:51.594 }, 00:15:51.594 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:51.594 } 00:15:51.594 }, 00:15:51.594 { 00:15:51.594 "method": "nvmf_subsystem_add_listener", 00:15:51.594 "params": { 00:15:51.594 "listen_address": { 00:15:51.594 "adrfam": "IPv4", 00:15:51.594 "traddr": "10.0.0.2", 00:15:51.594 "trsvcid": "4420", 00:15:51.594 "trtype": "TCP" 00:15:51.594 }, 00:15:51.594 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.594 "secure_channel": false, 00:15:51.594 "sock_impl": "ssl" 00:15:51.594 } 00:15:51.594 } 00:15:51.594 ] 00:15:51.594 } 00:15:51.594 ] 00:15:51.594 }' 00:15:51.594 20:32:12 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:15:51.853 "subsystems": [ 00:15:51.853 { 00:15:51.853 "subsystem": "keyring", 00:15:51.853 "config": [ 00:15:51.853 { 00:15:51.853 "method": "keyring_file_add_key", 00:15:51.853 "params": { 00:15:51.853 "name": "key0", 00:15:51.853 "path": "/tmp/tmp.q2MvlIhZ62" 00:15:51.853 } 00:15:51.853 } 00:15:51.853 ] 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "subsystem": "iobuf", 00:15:51.853 "config": [ 00:15:51.853 { 00:15:51.853 "method": "iobuf_set_options", 00:15:51.853 "params": { 00:15:51.853 "large_bufsize": 135168, 00:15:51.853 "large_pool_count": 1024, 00:15:51.853 "small_bufsize": 8192, 00:15:51.853 "small_pool_count": 8192 00:15:51.853 } 00:15:51.853 } 00:15:51.853 ] 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "subsystem": "sock", 00:15:51.853 "config": [ 00:15:51.853 { 00:15:51.853 "method": "sock_set_default_impl", 00:15:51.853 "params": { 00:15:51.853 "impl_name": "posix" 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "sock_impl_set_options", 00:15:51.853 "params": { 00:15:51.853 "enable_ktls": false, 00:15:51.853 "enable_placement_id": 0, 00:15:51.853 "enable_quickack": false, 00:15:51.853 "enable_recv_pipe": true, 00:15:51.853 "enable_zerocopy_send_client": false, 00:15:51.853 "enable_zerocopy_send_server": true, 00:15:51.853 "impl_name": "ssl", 00:15:51.853 "recv_buf_size": 4096, 00:15:51.853 "send_buf_size": 4096, 00:15:51.853 "tls_version": 0, 00:15:51.853 "zerocopy_threshold": 0 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "sock_impl_set_options", 00:15:51.853 "params": { 00:15:51.853 "enable_ktls": false, 00:15:51.853 "enable_placement_id": 0, 00:15:51.853 "enable_quickack": false, 00:15:51.853 "enable_recv_pipe": true, 00:15:51.853 "enable_zerocopy_send_client": false, 00:15:51.853 "enable_zerocopy_send_server": true, 00:15:51.853 "impl_name": "posix", 00:15:51.853 "recv_buf_size": 2097152, 00:15:51.853 "send_buf_size": 2097152, 00:15:51.853 "tls_version": 0, 00:15:51.853 "zerocopy_threshold": 0 00:15:51.853 } 00:15:51.853 } 00:15:51.853 ] 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "subsystem": "vmd", 00:15:51.853 "config": [] 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "subsystem": "accel", 00:15:51.853 "config": [ 00:15:51.853 { 00:15:51.853 "method": "accel_set_options", 00:15:51.853 "params": { 00:15:51.853 "buf_count": 2048, 00:15:51.853 "large_cache_size": 16, 00:15:51.853 "sequence_count": 2048, 00:15:51.853 "small_cache_size": 128, 00:15:51.853 "task_count": 2048 00:15:51.853 } 00:15:51.853 } 00:15:51.853 ] 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "subsystem": "bdev", 00:15:51.853 "config": [ 00:15:51.853 { 00:15:51.853 "method": "bdev_set_options", 00:15:51.853 "params": { 00:15:51.853 
"bdev_auto_examine": true, 00:15:51.853 "bdev_io_cache_size": 256, 00:15:51.853 "bdev_io_pool_size": 65535, 00:15:51.853 "iobuf_large_cache_size": 16, 00:15:51.853 "iobuf_small_cache_size": 128 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_raid_set_options", 00:15:51.853 "params": { 00:15:51.853 "process_window_size_kb": 1024 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_iscsi_set_options", 00:15:51.853 "params": { 00:15:51.853 "timeout_sec": 30 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_nvme_set_options", 00:15:51.853 "params": { 00:15:51.853 "action_on_timeout": "none", 00:15:51.853 "allow_accel_sequence": false, 00:15:51.853 "arbitration_burst": 0, 00:15:51.853 "bdev_retry_count": 3, 00:15:51.853 "ctrlr_loss_timeout_sec": 0, 00:15:51.853 "delay_cmd_submit": true, 00:15:51.853 "dhchap_dhgroups": [ 00:15:51.853 "null", 00:15:51.853 "ffdhe2048", 00:15:51.853 "ffdhe3072", 00:15:51.853 "ffdhe4096", 00:15:51.853 "ffdhe6144", 00:15:51.853 "ffdhe8192" 00:15:51.853 ], 00:15:51.853 "dhchap_digests": [ 00:15:51.853 "sha256", 00:15:51.853 "sha384", 00:15:51.853 "sha512" 00:15:51.853 ], 00:15:51.853 "disable_auto_failback": false, 00:15:51.853 "fast_io_fail_timeout_sec": 0, 00:15:51.853 "generate_uuids": false, 00:15:51.853 "high_priority_weight": 0, 00:15:51.853 "io_path_stat": false, 00:15:51.853 "io_queue_requests": 512, 00:15:51.853 "keep_alive_timeout_ms": 10000, 00:15:51.853 "low_priority_weight": 0, 00:15:51.853 "medium_priority_weight": 0, 00:15:51.853 "nvme_adminq_poll_period_us": 10000, 00:15:51.853 "nvme_error_stat": false, 00:15:51.853 "nvme_ioq_poll_period_us": 0, 00:15:51.853 "rdma_cm_event_timeout_ms": 0, 00:15:51.853 "rdma_max_cq_size": 0, 00:15:51.853 "rdma_srq_size": 0, 00:15:51.853 "reconnect_delay_sec": 0, 00:15:51.853 "timeout_admin_us": 0, 00:15:51.853 "timeout_us": 0, 00:15:51.853 "transport_ack_timeout": 0, 00:15:51.853 "transport_retry_count": 4, 00:15:51.853 "transport_tos": 0 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_nvme_attach_controller", 00:15:51.853 "params": { 00:15:51.853 "adrfam": "IPv4", 00:15:51.853 "ctrlr_loss_timeout_sec": 0, 00:15:51.853 "ddgst": false, 00:15:51.853 "fast_io_fail_timeout_sec": 0, 00:15:51.853 "hdgst": false, 00:15:51.853 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.853 "name": "nvme0", 00:15:51.853 "prchk_guard": false, 00:15:51.853 "prchk_reftag": false, 00:15:51.853 "psk": "key0", 00:15:51.853 "reconnect_delay_sec": 0, 00:15:51.853 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.853 "traddr": "10.0.0.2", 00:15:51.853 "trsvcid": "4420", 00:15:51.853 "trtype": "TCP" 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_nvme_set_hotplug", 00:15:51.853 "params": { 00:15:51.853 "enable": false, 00:15:51.853 "period_us": 100000 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_enable_histogram", 00:15:51.853 "params": { 00:15:51.853 "enable": true, 00:15:51.853 "name": "nvme0n1" 00:15:51.853 } 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "method": "bdev_wait_for_examine" 00:15:51.853 } 00:15:51.853 ] 00:15:51.853 }, 00:15:51.853 { 00:15:51.853 "subsystem": "nbd", 00:15:51.853 "config": [] 00:15:51.853 } 00:15:51.853 ] 00:15:51.853 }' 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 85048 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85048 ']' 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 85048 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85048 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:51.853 killing process with pid 85048 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:51.853 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85048' 00:15:51.853 Received shutdown signal, test time was about 1.000000 seconds 00:15:51.853 00:15:51.853 Latency(us) 00:15:51.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.853 =================================================================================================================== 00:15:51.854 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:51.854 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85048 00:15:51.854 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85048 00:15:52.111 20:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 84998 00:15:52.111 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 84998 ']' 00:15:52.111 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 84998 00:15:52.111 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:52.111 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:52.111 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84998 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:52.112 killing process with pid 84998 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84998' 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 84998 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 84998 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.112 20:32:13 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:15:52.112 "subsystems": [ 00:15:52.112 { 00:15:52.112 "subsystem": "keyring", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "keyring_file_add_key", 00:15:52.112 "params": { 00:15:52.112 "name": "key0", 00:15:52.112 "path": "/tmp/tmp.q2MvlIhZ62" 00:15:52.112 } 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "iobuf", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "iobuf_set_options", 00:15:52.112 "params": { 00:15:52.112 "large_bufsize": 135168, 00:15:52.112 "large_pool_count": 1024, 00:15:52.112 "small_bufsize": 8192, 00:15:52.112 "small_pool_count": 8192 00:15:52.112 } 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": 
"sock", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "sock_set_default_impl", 00:15:52.112 "params": { 00:15:52.112 "impl_name": "posix" 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "sock_impl_set_options", 00:15:52.112 "params": { 00:15:52.112 "enable_ktls": false, 00:15:52.112 "enable_placement_id": 0, 00:15:52.112 "enable_quickack": false, 00:15:52.112 "enable_recv_pipe": true, 00:15:52.112 "enable_zerocopy_send_client": false, 00:15:52.112 "enable_zerocopy_send_server": true, 00:15:52.112 "impl_name": "ssl", 00:15:52.112 "recv_buf_size": 4096, 00:15:52.112 "send_buf_size": 4096, 00:15:52.112 "tls_version": 0, 00:15:52.112 "zerocopy_threshold": 0 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "sock_impl_set_options", 00:15:52.112 "params": { 00:15:52.112 "enable_ktls": false, 00:15:52.112 "enable_placement_id": 0, 00:15:52.112 "enable_quickack": false, 00:15:52.112 "enable_recv_pipe": true, 00:15:52.112 "enable_zerocopy_send_client": false, 00:15:52.112 "enable_zerocopy_send_server": true, 00:15:52.112 "impl_name": "posix", 00:15:52.112 "recv_buf_size": 2097152, 00:15:52.112 "send_buf_size": 2097152, 00:15:52.112 "tls_version": 0, 00:15:52.112 "zerocopy_threshold": 0 00:15:52.112 } 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "vmd", 00:15:52.112 "config": [] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "accel", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "accel_set_options", 00:15:52.112 "params": { 00:15:52.112 "buf_count": 2048, 00:15:52.112 "large_cache_size": 16, 00:15:52.112 "sequence_count": 2048, 00:15:52.112 "small_cache_size": 128, 00:15:52.112 "task_count": 2048 00:15:52.112 } 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "bdev", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "bdev_set_options", 00:15:52.112 "params": { 00:15:52.112 "bdev_auto_examine": true, 00:15:52.112 "bdev_io_cache_size": 256, 00:15:52.112 "bdev_io_pool_size": 65535, 00:15:52.112 "iobuf_large_cache_size": 16, 00:15:52.112 "iobuf_small_cache_size": 128 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "bdev_raid_set_options", 00:15:52.112 "params": { 00:15:52.112 "process_window_size_kb": 1024 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "bdev_iscsi_set_options", 00:15:52.112 "params": { 00:15:52.112 "timeout_sec": 30 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "bdev_nvme_set_options", 00:15:52.112 "params": { 00:15:52.112 "action_on_timeout": "none", 00:15:52.112 "allow_accel_sequence": false, 00:15:52.112 "arbitration_burst": 0, 00:15:52.112 "bdev_retry_count": 3, 00:15:52.112 "ctrlr_loss_timeout_sec": 0, 00:15:52.112 "delay_cmd_submit": true, 00:15:52.112 "dhchap_dhgroups": [ 00:15:52.112 "null", 00:15:52.112 "ffdhe2048", 00:15:52.112 "ffdhe3072", 00:15:52.112 "ffdhe4096", 00:15:52.112 "ffdhe6144", 00:15:52.112 "ffdhe8192" 00:15:52.112 ], 00:15:52.112 "dhchap_digests": [ 00:15:52.112 "sha256", 00:15:52.112 "sha384", 00:15:52.112 "sha512" 00:15:52.112 ], 00:15:52.112 "disable_auto_failback": false, 00:15:52.112 "fast_io_fail_timeout_sec": 0, 00:15:52.112 "generate_uuids": false, 00:15:52.112 "high_priority_weight": 0, 00:15:52.112 "io_path_stat": false, 00:15:52.112 "io_queue_requests": 0, 00:15:52.112 "keep_alive_timeout_ms": 10000, 00:15:52.112 "low_priority_weight": 0, 00:15:52.112 "medium_priority_weight": 0, 00:15:52.112 
"nvme_adminq_poll_period_us": 10000, 00:15:52.112 "nvme_error_stat": false, 00:15:52.112 "nvme_ioq_poll_period_us": 0, 00:15:52.112 "rdma_cm_event_timeout_ms": 0, 00:15:52.112 "rdma_max_cq_size": 0, 00:15:52.112 "rdma_srq_size": 0, 00:15:52.112 "reconnect_delay_sec": 0, 00:15:52.112 "timeout_admin_us": 0, 00:15:52.112 "timeout_us": 0, 00:15:52.112 "transport_ack_timeout": 0, 00:15:52.112 "transport_retry_count": 4, 00:15:52.112 "transport_tos": 0 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "bdev_nvme_set_hotplug", 00:15:52.112 "params": { 00:15:52.112 "enable": false, 00:15:52.112 "period_us": 100000 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "bdev_malloc_create", 00:15:52.112 "params": { 00:15:52.112 "block_size": 4096, 00:15:52.112 "name": "malloc0", 00:15:52.112 "num_blocks": 8192, 00:15:52.112 "optimal_io_boundary": 0, 00:15:52.112 "physical_block_size": 4096, 00:15:52.112 "uuid": "c11e9e76-c59b-4cfa-9cec-5cfbc021d32d" 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "bdev_wait_for_examine" 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "nbd", 00:15:52.112 "config": [] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "scheduler", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "framework_set_scheduler", 00:15:52.112 "params": { 00:15:52.112 "name": "static" 00:15:52.112 } 00:15:52.112 } 00:15:52.112 ] 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "subsystem": "nvmf", 00:15:52.112 "config": [ 00:15:52.112 { 00:15:52.112 "method": "nvmf_set_config", 00:15:52.112 "params": { 00:15:52.112 "admin_cmd_passthru": { 00:15:52.112 "identify_ctrlr": false 00:15:52.112 }, 00:15:52.112 "discovery_filter": "match_any" 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "nvmf_set_max_subsystems", 00:15:52.112 "params": { 00:15:52.112 "max_subsystems": 1024 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "nvmf_set_crdt", 00:15:52.112 "params": { 00:15:52.112 "crdt1": 0, 00:15:52.112 "crdt2": 0, 00:15:52.112 "crdt3": 0 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "nvmf_create_transport", 00:15:52.112 "params": { 00:15:52.112 "abort_timeout_sec": 1, 00:15:52.112 "ack_timeout": 0, 00:15:52.112 "buf_cache_size": 4294967295, 00:15:52.112 "c2h_success": false, 00:15:52.112 "data_wr_pool_size": 0, 00:15:52.112 "dif_insert_or_strip": false, 00:15:52.112 "in_capsule_data_size": 4096, 00:15:52.112 "io_unit_size": 131072, 00:15:52.112 "max_aq_depth": 128, 00:15:52.112 "max_io_qpairs_per_ctrlr": 127, 00:15:52.112 "max_io_size": 131072, 00:15:52.112 "max_queue_depth": 128, 00:15:52.112 "num_shared_buffers": 511, 00:15:52.112 "sock_priority": 0, 00:15:52.112 "trtype": "TCP", 00:15:52.112 "zcopy": false 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "nvmf_create_subsystem", 00:15:52.112 "params": { 00:15:52.112 "allow_any_host": false, 00:15:52.112 "ana_reporting": false, 00:15:52.112 "max_cntlid": 65519, 00:15:52.112 "max_namespaces": 32, 00:15:52.112 "min_cntlid": 1, 00:15:52.112 "model_number": "SPDK bdev Controller", 00:15:52.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.112 "serial_number": "00000000000000000000" 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "nvmf_subsystem_add_host", 00:15:52.112 "params": { 00:15:52.112 "host": "nqn.2016-06.io.spdk:host1", 00:15:52.112 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.112 "psk": "key0" 00:15:52.112 } 00:15:52.112 }, 
00:15:52.112 { 00:15:52.112 "method": "nvmf_subsystem_add_ns", 00:15:52.112 "params": { 00:15:52.112 "namespace": { 00:15:52.112 "bdev_name": "malloc0", 00:15:52.112 "nguid": "C11E9E76C59B4CFA9CEC5CFBC021D32D", 00:15:52.112 "no_auto_visible": false, 00:15:52.112 "nsid": 1, 00:15:52.112 "uuid": "c11e9e76-c59b-4cfa-9cec-5cfbc021d32d" 00:15:52.112 }, 00:15:52.112 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:15:52.112 } 00:15:52.112 }, 00:15:52.112 { 00:15:52.112 "method": "nvmf_subsystem_add_listener", 00:15:52.112 "params": { 00:15:52.112 "listen_address": { 00:15:52.112 "adrfam": "IPv4", 00:15:52.112 "traddr": "10.0.0.2", 00:15:52.112 "trsvcid": "4420", 00:15:52.113 "trtype": "TCP" 00:15:52.113 }, 00:15:52.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:52.113 "secure_channel": false, 00:15:52.113 "sock_impl": "ssl" 00:15:52.113 } 00:15:52.113 } 00:15:52.113 ] 00:15:52.113 } 00:15:52.113 ] 00:15:52.113 }' 00:15:52.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=85141 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 85141 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85141 ']' 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:52.370 20:32:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:52.370 [2024-07-15 20:32:13.666606] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:52.370 [2024-07-15 20:32:13.666697] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.370 [2024-07-15 20:32:13.802153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.370 [2024-07-15 20:32:13.862079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.370 [2024-07-15 20:32:13.862132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.370 [2024-07-15 20:32:13.862143] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.370 [2024-07-15 20:32:13.862152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.370 [2024-07-15 20:32:13.862159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
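The config blob echoed into this nvmf_tgt through -c /dev/fd/62 is the full save_config dump captured earlier; only a few entries in it carry the TLS setup (the keyring key, the ssl sock impl, the psk binding for host1, and the non-secure-channel listener). A trimmed sketch of that subset, with the bdev/namespace and default-tuning entries omitted, would look like:

    # TLS-relevant subset of the -c /dev/fd/62 config (sketch; the real dump also carries
    # iobuf/accel/bdev/scheduler defaults and the malloc0 namespace).
    cat > /tmp/tls_tgt.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "keyring",
          "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.q2MvlIhZ62" } } ] },
        { "subsystem": "sock",
          "config": [
            { "method": "sock_impl_set_options",
              "params": { "impl_name": "ssl", "tls_version": 0 } } ] },
        { "subsystem": "nvmf",
          "config": [
            { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
            { "method": "nvmf_create_subsystem",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
            { "method": "nvmf_subsystem_add_host",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
            { "method": "nvmf_subsystem_add_listener",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                              "traddr": "10.0.0.2", "trsvcid": "4420" },
                          "secure_channel": false, "sock_impl": "ssl" } } ] }
      ]
    }
    EOF

The bdevperf config echoed next reuses the same key file under the same key0 name on the initiator side.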
00:15:52.370 [2024-07-15 20:32:13.862247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.627 [2024-07-15 20:32:14.053376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.627 [2024-07-15 20:32:14.085307] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:52.627 [2024-07-15 20:32:14.085537] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=85186 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 85186 /var/tmp/bdevperf.sock 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 85186 ']' 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:53.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
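Once this bdevperf instance is listening on /var/tmp/bdevperf.sock (its initiator-side config, including the key0 PSK and the bdev_nvme_attach_controller call, is fed in through -c /dev/fd/63 below), the run itself is driven over RPC. A sketch of that driving sequence, using the same socket and controller name as the trace:

    # Drive the bdevperf run over its RPC socket (sketch; paths match the trace above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # confirm the TLS-attached controller (nvme0) came up before starting I/O
    "$rpc" -s "$sock" bdev_nvme_get_controllers
    # kick off the workload given on the command line (-q 128 -o 4k -w verify -t 1)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

This mirrors the earlier run against pid 85048, where the key add and attach_controller were issued as explicit RPCs instead of arriving via the -c config.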
00:15:53.561 20:32:14 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:15:53.561 "subsystems": [ 00:15:53.561 { 00:15:53.561 "subsystem": "keyring", 00:15:53.561 "config": [ 00:15:53.561 { 00:15:53.561 "method": "keyring_file_add_key", 00:15:53.561 "params": { 00:15:53.561 "name": "key0", 00:15:53.561 "path": "/tmp/tmp.q2MvlIhZ62" 00:15:53.561 } 00:15:53.561 } 00:15:53.561 ] 00:15:53.561 }, 00:15:53.561 { 00:15:53.561 "subsystem": "iobuf", 00:15:53.561 "config": [ 00:15:53.561 { 00:15:53.562 "method": "iobuf_set_options", 00:15:53.562 "params": { 00:15:53.562 "large_bufsize": 135168, 00:15:53.562 "large_pool_count": 1024, 00:15:53.562 "small_bufsize": 8192, 00:15:53.562 "small_pool_count": 8192 00:15:53.562 } 00:15:53.562 } 00:15:53.562 ] 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "subsystem": "sock", 00:15:53.562 "config": [ 00:15:53.562 { 00:15:53.562 "method": "sock_set_default_impl", 00:15:53.562 "params": { 00:15:53.562 "impl_name": "posix" 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "sock_impl_set_options", 00:15:53.562 "params": { 00:15:53.562 "enable_ktls": false, 00:15:53.562 "enable_placement_id": 0, 00:15:53.562 "enable_quickack": false, 00:15:53.562 "enable_recv_pipe": true, 00:15:53.562 "enable_zerocopy_send_client": false, 00:15:53.562 "enable_zerocopy_send_server": true, 00:15:53.562 "impl_name": "ssl", 00:15:53.562 "recv_buf_size": 4096, 00:15:53.562 "send_buf_size": 4096, 00:15:53.562 "tls_version": 0, 00:15:53.562 "zerocopy_threshold": 0 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "sock_impl_set_options", 00:15:53.562 "params": { 00:15:53.562 "enable_ktls": false, 00:15:53.562 "enable_placement_id": 0, 00:15:53.562 "enable_quickack": false, 00:15:53.562 "enable_recv_pipe": true, 00:15:53.562 "enable_zerocopy_send_client": false, 00:15:53.562 "enable_zerocopy_send_server": true, 00:15:53.562 "impl_name": "posix", 00:15:53.562 "recv_buf_size": 2097152, 00:15:53.562 "send_buf_size": 2097152, 00:15:53.562 "tls_version": 0, 00:15:53.562 "zerocopy_threshold": 0 00:15:53.562 } 00:15:53.562 } 00:15:53.562 ] 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "subsystem": "vmd", 00:15:53.562 "config": [] 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "subsystem": "accel", 00:15:53.562 "config": [ 00:15:53.562 { 00:15:53.562 "method": "accel_set_options", 00:15:53.562 "params": { 00:15:53.562 "buf_count": 2048, 00:15:53.562 "large_cache_size": 16, 00:15:53.562 "sequence_count": 2048, 00:15:53.562 "small_cache_size": 128, 00:15:53.562 "task_count": 2048 00:15:53.562 } 00:15:53.562 } 00:15:53.562 ] 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "subsystem": "bdev", 00:15:53.562 "config": [ 00:15:53.562 { 00:15:53.562 "method": "bdev_set_options", 00:15:53.562 "params": { 00:15:53.562 "bdev_auto_examine": true, 00:15:53.562 "bdev_io_cache_size": 256, 00:15:53.562 "bdev_io_pool_size": 65535, 00:15:53.562 "iobuf_large_cache_size": 16, 00:15:53.562 "iobuf_small_cache_size": 128 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "bdev_raid_set_options", 00:15:53.562 "params": { 00:15:53.562 "process_window_size_kb": 1024 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "bdev_iscsi_set_options", 00:15:53.562 "params": { 00:15:53.562 "timeout_sec": 30 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "bdev_nvme_set_options", 00:15:53.562 "params": { 00:15:53.562 "action_on_timeout": "none", 00:15:53.562 "allow_accel_sequence": false, 00:15:53.562 "arbitration_burst": 0, 00:15:53.562 
"bdev_retry_count": 3, 00:15:53.562 "ctrlr_loss_timeout_sec": 0, 00:15:53.562 "delay_cmd_submit": true, 00:15:53.562 "dhchap_dhgroups": [ 00:15:53.562 "null", 00:15:53.562 "ffdhe2048", 00:15:53.562 "ffdhe3072", 00:15:53.562 "ffdhe4096", 00:15:53.562 "ffdhe6144", 00:15:53.562 "ffdhe8192" 00:15:53.562 ], 00:15:53.562 "dhchap_digests": [ 00:15:53.562 "sha256", 00:15:53.562 "sha384", 00:15:53.562 "sha512" 00:15:53.562 ], 00:15:53.562 "disable_auto_failback": false, 00:15:53.562 "fast_io_fail_timeout_sec": 0, 00:15:53.562 "generate_uuids": false, 00:15:53.562 "high_priority_weight": 0, 00:15:53.562 "io_path_stat": false, 00:15:53.562 "io_queue_requests": 512, 00:15:53.562 "keep_alive_timeout_ms": 10000, 00:15:53.562 "low_priority_weight": 0, 00:15:53.562 "medium_priority_weight": 0, 00:15:53.562 "nvme_adminq_poll_period_us": 10000, 00:15:53.562 "nvme_error_stat": false, 00:15:53.562 "nvme_ioq_poll_period_us": 0, 00:15:53.562 "rdma_cm_event_timeout_ms": 0, 00:15:53.562 "rdma_max_cq_size": 0, 00:15:53.562 "rdma_srq_size": 0, 00:15:53.562 "reconnect_delay_sec": 0, 00:15:53.562 "timeout_admin_us": 0, 00:15:53.562 "timeout_us": 0, 00:15:53.562 "transport_ack_timeout": 0, 00:15:53.562 "transport_retry_count": 4, 00:15:53.562 "transport_tos": 0 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "bdev_nvme_attach_controller", 00:15:53.562 "params": { 00:15:53.562 "adrfam": "IPv4", 00:15:53.562 "ctrlr_loss_timeout_sec": 0, 00:15:53.562 "ddgst": false, 00:15:53.562 "fast_io_fail_timeout_sec": 0, 00:15:53.562 "hdgst": false, 00:15:53.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:53.562 "name": "nvme0", 00:15:53.562 "prchk_guard": false, 00:15:53.562 "prchk_reftag": false, 00:15:53.562 "psk": "key0", 00:15:53.562 "reconnect_delay_sec": 0, 00:15:53.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:53.562 "traddr": "10.0.0.2", 00:15:53.562 "trsvcid": "4420", 00:15:53.562 "trtype": "TCP" 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.562 "method": "bdev_nvme_set_hotplug", 00:15:53.562 "params": { 00:15:53.562 "enable": false, 00:15:53.562 "period_us": 100000 00:15:53.562 } 00:15:53.562 }, 00:15:53.562 { 00:15:53.563 "method": "bdev_enable_histogram", 00:15:53.563 "params": { 00:15:53.563 "enable": true, 00:15:53.563 "name": "nvme0n1" 00:15:53.563 } 00:15:53.563 }, 00:15:53.563 { 00:15:53.563 "method": "bdev_wait_for_examine" 00:15:53.563 } 00:15:53.563 ] 00:15:53.563 }, 00:15:53.563 { 00:15:53.563 "subsystem": "nbd", 00:15:53.563 "config": [] 00:15:53.563 } 00:15:53.563 ] 00:15:53.563 }' 00:15:53.563 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:53.563 20:32:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:53.563 [2024-07-15 20:32:14.901375] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:53.563 [2024-07-15 20:32:14.901498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85186 ] 00:15:53.563 [2024-07-15 20:32:15.040065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.821 [2024-07-15 20:32:15.128295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.821 [2024-07-15 20:32:15.260555] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:54.387 20:32:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:54.387 20:32:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:15:54.387 20:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:54.387 20:32:15 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:15:54.644 20:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.644 20:32:16 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:54.903 Running I/O for 1 seconds... 00:15:55.840 00:15:55.841 Latency(us) 00:15:55.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:55.841 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:55.841 Verification LBA range: start 0x0 length 0x2000 00:15:55.841 nvme0n1 : 1.02 3818.52 14.92 0.00 0.00 33166.62 6940.86 30980.65 00:15:55.841 =================================================================================================================== 00:15:55.841 Total : 3818.52 14.92 0.00 0.00 33166.62 6940.86 30980.65 00:15:55.841 0 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:55.841 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:55.841 nvmf_trace.0 00:15:56.099 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 85186 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85186 ']' 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85186 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:56.100 
20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85186 00:15:56.100 killing process with pid 85186 00:15:56.100 Received shutdown signal, test time was about 1.000000 seconds 00:15:56.100 00:15:56.100 Latency(us) 00:15:56.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.100 =================================================================================================================== 00:15:56.100 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85186' 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85186 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85186 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.100 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.100 rmmod nvme_tcp 00:15:56.358 rmmod nvme_fabrics 00:15:56.358 rmmod nvme_keyring 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 85141 ']' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 85141 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 85141 ']' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 85141 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85141 00:15:56.358 killing process with pid 85141 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85141' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 85141 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 85141 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.358 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.617 20:32:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.617 20:32:17 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qYPduh2kEd /tmp/tmp.L3lqx161nn /tmp/tmp.q2MvlIhZ62 00:15:56.617 ************************************ 00:15:56.617 END TEST nvmf_tls 00:15:56.617 ************************************ 00:15:56.617 00:15:56.617 real 1m25.615s 00:15:56.617 user 2m17.622s 00:15:56.617 sys 0m27.194s 00:15:56.617 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:56.617 20:32:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.617 20:32:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:56.617 20:32:17 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:56.617 20:32:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:56.617 20:32:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.617 20:32:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.617 ************************************ 00:15:56.617 START TEST nvmf_fips 00:15:56.617 ************************************ 00:15:56.617 20:32:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:56.617 * Looking for test storage... 
00:15:56.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.617 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:56.618 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:56.877 Error setting digest 00:15:56.877 0012AAAF837F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:56.877 0012AAAF837F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.877 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.878 Cannot find device "nvmf_tgt_br" 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.878 Cannot find device "nvmf_tgt_br2" 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.878 Cannot find device "nvmf_tgt_br" 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.878 Cannot find device "nvmf_tgt_br2" 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.878 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.878 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:57.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:15:57.136 00:15:57.136 --- 10.0.0.2 ping statistics --- 00:15:57.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.136 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:57.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:57.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:57.136 00:15:57.136 --- 10.0.0.3 ping statistics --- 00:15:57.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.136 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:57.136 00:15:57.136 --- 10.0.0.1 ping statistics --- 00:15:57.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.136 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:57.136 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=85463 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 85463 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85463 ']' 00:15:57.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.137 20:32:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:57.394 [2024-07-15 20:32:18.677313] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:15:57.394 [2024-07-15 20:32:18.677415] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.394 [2024-07-15 20:32:18.813311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.394 [2024-07-15 20:32:18.888259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.394 [2024-07-15 20:32:18.888356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.394 [2024-07-15 20:32:18.888377] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.394 [2024-07-15 20:32:18.888392] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.394 [2024-07-15 20:32:18.888406] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.394 [2024-07-15 20:32:18.888458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:58.331 20:32:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:58.590 [2024-07-15 20:32:19.979474] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.590 [2024-07-15 20:32:19.995477] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:58.590 [2024-07-15 20:32:19.995706] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.590 [2024-07-15 20:32:20.024628] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:58.590 malloc0 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=85521 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- 
fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 85521 /var/tmp/bdevperf.sock 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 85521 ']' 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.590 20:32:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:58.849 [2024-07-15 20:32:20.151793] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:15:58.849 [2024-07-15 20:32:20.151912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85521 ] 00:15:58.849 [2024-07-15 20:32:20.284015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.107 [2024-07-15 20:32:20.374494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:59.673 20:32:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.673 20:32:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:15:59.673 20:32:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:59.932 [2024-07-15 20:32:21.408006] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:59.932 [2024-07-15 20:32:21.408140] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:00.190 TLSTESTn1 00:16:00.190 20:32:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:00.190 Running I/O for 10 seconds... 
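Stripped of the NOT/valid_exec_arg scaffolding, the data-path portion of the fips test above is three commands: start bdevperf in RPC-driven mode, attach a PSK-authenticated NVMe/TCP controller to the target listening on 10.0.0.2:4420, and kick off the 10 second verify run. A hedged re-creation using only paths and arguments that appear in this trace (the explicit wait for the RPC socket that the script performs is reduced to a comment here):

  SPDK=/home/vagrant/spdk_repo/spdk
  KEY=$SPDK/test/nvmf/fips/key.txt        # PSK written and chmod 0600'd a few lines earlier
  # OPENSSL_CONF=spdk_fips.conf is already exported at this point in the trace.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # (the real script waits for /var/tmp/bdevperf.sock to appear before issuing RPCs)
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -z flag appears to hold bdevperf until perform_tests arrives over the RPC socket, which matches the ordering in the trace: the controller is attached first, TLSTESTn1 shows up as the bdev under test, and only then does the I/O clock start.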
00:16:12.387 00:16:12.387 Latency(us) 00:16:12.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.387 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:12.387 Verification LBA range: start 0x0 length 0x2000 00:16:12.387 TLSTESTn1 : 10.02 3506.38 13.70 0.00 0.00 36431.12 7685.59 36938.47 00:16:12.387 =================================================================================================================== 00:16:12.387 Total : 3506.38 13.70 0.00 0.00 36431.12 7685.59 36938.47 00:16:12.387 0 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:12.387 nvmf_trace.0 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85521 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85521 ']' 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85521 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85521 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:12.387 killing process with pid 85521 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85521' 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85521 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85521 00:16:12.387 Received shutdown signal, test time was about 10.000000 seconds 00:16:12.387 00:16:12.387 Latency(us) 00:16:12.387 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:12.387 =================================================================================================================== 00:16:12.387 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:12.387 [2024-07-15 20:32:31.810756] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:16:12.387 20:32:31 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:16:12.387 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.388 rmmod nvme_tcp 00:16:12.388 rmmod nvme_fabrics 00:16:12.388 rmmod nvme_keyring 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 85463 ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 85463 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 85463 ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 85463 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85463 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:12.388 killing process with pid 85463 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85463' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 85463 00:16:12.388 [2024-07-15 20:32:32.081094] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 85463 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:16:12.388 00:16:12.388 real 0m14.356s 00:16:12.388 user 0m19.661s 00:16:12.388 sys 0m5.746s 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.388 20:32:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:12.388 ************************************ 00:16:12.388 END TEST nvmf_fips 00:16:12.388 ************************************ 00:16:12.388 20:32:32 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:12.388 20:32:32 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:16:12.388 20:32:32 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:16:12.388 20:32:32 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.388 20:32:32 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.388 20:32:32 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:16:12.388 20:32:32 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.388 20:32:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.388 ************************************ 00:16:12.388 START TEST nvmf_multicontroller 00:16:12.388 ************************************ 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:16:12.388 * Looking for test storage... 00:16:12.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
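Earlier in this common.sh preamble the test builds a host identity once (nvme gen-hostnqn filling NVME_HOSTNQN, with NVME_HOSTID carrying its UUID part) and stores the matching nvme-cli arguments in NVME_HOST next to NVME_CONNECT='nvme connect'. Expanded by hand it would look roughly like the sketch below; the target address, port and subsystem NQN are placeholders borrowed from elsewhere in this log, not something the preamble itself dictates.

  HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<random uuid>
  HOSTID=${HOSTNQN##*:}                # the UUID part, mirroring how NVME_HOSTID relates to NVME_HOSTNQN above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$HOSTNQN" --hostid="$HOSTID"

The multicontroller test below drives I/O through bdevperf rather than the kernel initiator, so these variables appear unused in this particular test, but the same identity pair is what the other host tests hand to nvme connect.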
00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.388 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.389 20:32:32 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:12.389 Cannot find device "nvmf_tgt_br" 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.389 Cannot find device "nvmf_tgt_br2" 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:12.389 Cannot find device "nvmf_tgt_br" 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:12.389 Cannot find device "nvmf_tgt_br2" 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:12.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:16:12.389 00:16:12.389 --- 10.0.0.2 ping statistics --- 00:16:12.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.389 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:12.389 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.389 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:16:12.389 00:16:12.389 --- 10.0.0.3 ping statistics --- 00:16:12.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.389 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:16:12.389 00:16:12.389 --- 10.0.0.1 ping statistics --- 00:16:12.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.389 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85890 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85890 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85890 ']' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.389 20:32:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:12.389 [2024-07-15 20:32:32.904348] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:12.389 [2024-07-15 20:32:32.904453] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.389 [2024-07-15 20:32:33.047339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:12.389 [2024-07-15 20:32:33.117890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
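nvmf_veth_init, traced in full both here and in the fips test above, always builds the same throwaway topology: one network namespace for the target, three veth pairs whose host-side ends hang off a bridge, 10.0.0.1 on the initiator interface and 10.0.0.2/10.0.0.3 on the two interfaces inside the namespace, plus iptables accept rules. Collected from the trace above, with the xtrace noise and the per-link 'up' commands removed:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # ...each interface, the namespace loopback and the bridge are then brought up as in the trace...
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm both directions work before nvmf_tgt is started inside the namespace with ip netns exec.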
00:16:12.389 [2024-07-15 20:32:33.117950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:12.389 [2024-07-15 20:32:33.117963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:12.389 [2024-07-15 20:32:33.117973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:12.389 [2024-07-15 20:32:33.117982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:12.389 [2024-07-15 20:32:33.118522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:12.389 [2024-07-15 20:32:33.118620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.389 [2024-07-15 20:32:33.118629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.648 20:32:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 [2024-07-15 20:32:34.003418] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 Malloc0 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.648 [2024-07-15 20:32:34.056424] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.648 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.649 [2024-07-15 20:32:34.064349] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.649 Malloc1 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:12.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
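Up to this point the multicontroller target is configured purely through rpc_cmd: one TCP transport, two 64 MiB malloc bdevs, and two subsystems that each listen on 10.0.0.2 at both 4420 and 4421. Written out as plain scripts/rpc.py calls, under the assumption that rpc_cmd here is a thin wrapper over that script talking to the default /var/tmp/spdk.sock:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421

Two subsystems reachable over the same address and ports is the setup the negative bdev_nvme_attach_controller cases further down depend on: they try to reuse the controller name NVMe0 against a different subsystem or host identity and expect the "already exists with the specified network path" error.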
00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85948 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85948 /var/tmp/bdevperf.sock 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 85948 ']' 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:12.649 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.215 NVMe0n1 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.215 1 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:13.215 20:32:34 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.215 2024/07/15 20:32:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:13.215 request: 00:16:13.215 { 00:16:13.215 "method": "bdev_nvme_attach_controller", 00:16:13.215 "params": { 00:16:13.215 "name": "NVMe0", 00:16:13.215 "trtype": "tcp", 00:16:13.215 "traddr": "10.0.0.2", 00:16:13.215 "adrfam": "ipv4", 00:16:13.215 "trsvcid": "4420", 00:16:13.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.215 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:16:13.215 "hostaddr": "10.0.0.2", 00:16:13.215 "hostsvcid": "60000", 00:16:13.215 "prchk_reftag": false, 00:16:13.215 "prchk_guard": false, 00:16:13.215 "hdgst": false, 00:16:13.215 "ddgst": false 00:16:13.215 } 00:16:13.215 } 00:16:13.215 Got JSON-RPC error response 00:16:13.215 GoRPCClient: error on JSON-RPC call 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.215 2024/07/15 20:32:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:13.215 request: 00:16:13.215 { 00:16:13.215 "method": "bdev_nvme_attach_controller", 00:16:13.215 "params": { 00:16:13.215 "name": "NVMe0", 00:16:13.215 "trtype": "tcp", 00:16:13.215 "traddr": "10.0.0.2", 00:16:13.215 "adrfam": "ipv4", 00:16:13.215 "trsvcid": "4420", 00:16:13.215 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:13.215 "hostaddr": "10.0.0.2", 00:16:13.215 "hostsvcid": "60000", 00:16:13.215 "prchk_reftag": false, 00:16:13.215 "prchk_guard": false, 00:16:13.215 "hdgst": false, 00:16:13.215 "ddgst": false 00:16:13.215 } 00:16:13.215 } 00:16:13.215 Got JSON-RPC error response 00:16:13.215 GoRPCClient: error on JSON-RPC call 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.215 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.216 2024/07/15 20:32:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:16:13.216 request: 00:16:13.216 { 00:16:13.216 "method": "bdev_nvme_attach_controller", 00:16:13.216 "params": { 00:16:13.216 "name": "NVMe0", 00:16:13.216 "trtype": "tcp", 00:16:13.216 "traddr": "10.0.0.2", 00:16:13.216 "adrfam": "ipv4", 00:16:13.216 "trsvcid": "4420", 00:16:13.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.216 "hostaddr": "10.0.0.2", 00:16:13.216 "hostsvcid": "60000", 00:16:13.216 "prchk_reftag": false, 00:16:13.216 "prchk_guard": false, 00:16:13.216 "hdgst": false, 00:16:13.216 "ddgst": false, 00:16:13.216 "multipath": "disable" 00:16:13.216 } 00:16:13.216 } 00:16:13.216 Got JSON-RPC error response 00:16:13.216 GoRPCClient: error on JSON-RPC call 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.216 2024/07/15 20:32:34 error on JSON-RPC 
call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:16:13.216 request: 00:16:13.216 { 00:16:13.216 "method": "bdev_nvme_attach_controller", 00:16:13.216 "params": { 00:16:13.216 "name": "NVMe0", 00:16:13.216 "trtype": "tcp", 00:16:13.216 "traddr": "10.0.0.2", 00:16:13.216 "adrfam": "ipv4", 00:16:13.216 "trsvcid": "4420", 00:16:13.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:13.216 "hostaddr": "10.0.0.2", 00:16:13.216 "hostsvcid": "60000", 00:16:13.216 "prchk_reftag": false, 00:16:13.216 "prchk_guard": false, 00:16:13.216 "hdgst": false, 00:16:13.216 "ddgst": false, 00:16:13.216 "multipath": "failover" 00:16:13.216 } 00:16:13.216 } 00:16:13.216 Got JSON-RPC error response 00:16:13.216 GoRPCClient: error on JSON-RPC call 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.216 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.216 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.473 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:16:13.473 20:32:34 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:16:13.473 20:32:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:14.921 0 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85948 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85948 ']' 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85948 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85948 00:16:14.921 killing process with pid 85948 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85948' 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85948 00:16:14.921 20:32:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85948 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r 
file 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:16:14.921 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:16:14.921 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:14.921 [2024-07-15 20:32:34.175040] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:14.921 [2024-07-15 20:32:34.175171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85948 ] 00:16:14.921 [2024-07-15 20:32:34.315955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.921 [2024-07-15 20:32:34.384314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.921 [2024-07-15 20:32:34.763824] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 50453a96-9dbb-4614-b9ad-a26b17a66eb8 already exists 00:16:14.921 [2024-07-15 20:32:34.763900] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:50453a96-9dbb-4614-b9ad-a26b17a66eb8 alias for bdev NVMe1n1 00:16:14.921 [2024-07-15 20:32:34.763924] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:16:14.921 Running I/O for 1 seconds... 00:16:14.921 00:16:14.921 Latency(us) 00:16:14.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.921 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:16:14.921 NVMe0n1 : 1.01 19316.30 75.45 0.00 0.00 6608.11 2129.92 14120.03 00:16:14.921 =================================================================================================================== 00:16:14.922 Total : 19316.30 75.45 0.00 0.00 6608.11 2129.92 14120.03 00:16:14.922 Received shutdown signal, test time was about 1.000000 seconds 00:16:14.922 00:16:14.922 Latency(us) 00:16:14.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.922 =================================================================================================================== 00:16:14.922 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:14.922 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.922 rmmod nvme_tcp 00:16:14.922 rmmod nvme_fabrics 00:16:14.922 rmmod nvme_keyring 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85890 ']' 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85890 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 85890 ']' 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 85890 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85890 00:16:14.922 killing process with pid 85890 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85890' 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 85890 00:16:14.922 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 85890 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:15.181 00:16:15.181 real 0m4.135s 00:16:15.181 user 0m12.483s 00:16:15.181 sys 0m0.900s 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.181 20:32:36 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:15.181 ************************************ 00:16:15.181 END TEST nvmf_multicontroller 00:16:15.181 ************************************ 00:16:15.181 20:32:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:15.181 20:32:36 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:15.181 20:32:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.181 20:32:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.181 20:32:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.181 ************************************ 00:16:15.181 START TEST nvmf_aer 00:16:15.181 ************************************ 
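Before the aer.sh output begins, the attach sequence the multicontroller run above exercised through bdevperf is easier to follow as a compact sketch. It assumes bdevperf is running with -z -r /var/tmp/bdevperf.sock as launched above; the commented outcomes are the ones observed in this trace, not a general statement of the RPC's contract:

  # first attach creates bdev NVMe0n1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # reusing the name NVMe0 with a different host NQN, with a different subsystem (cnode2),
  # or with -x disable / -x failover was rejected above with JSON-RPC error Code=-114
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
      -q nqn.2021-09-7.io.spdk:00001      # expected to fail in this trace
  # adding the 4421 listener as a second path under the same name succeeded, and after
  # detaching that path a second controller NVMe1 with its own host identity also succeeded
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # the test expects 2 entries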
00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:16:15.181 * Looking for test storage... 00:16:15.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.181 20:32:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:15.440 Cannot find device "nvmf_tgt_br" 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:15.440 Cannot find device "nvmf_tgt_br2" 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:15.440 Cannot find device "nvmf_tgt_br" 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:15.440 Cannot find device "nvmf_tgt_br2" 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:15.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:15.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:15.440 
20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:15.440 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:15.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:16:15.699 00:16:15.699 --- 10.0.0.2 ping statistics --- 00:16:15.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.699 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:15.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:16:15.699 00:16:15.699 --- 10.0.0.3 ping statistics --- 00:16:15.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.699 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:15.699 20:32:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:15.699 00:16:15.699 --- 10.0.0.1 ping statistics --- 00:16:15.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.699 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=86183 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 86183 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 86183 ']' 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.699 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:15.699 [2024-07-15 20:32:37.094241] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:15.699 [2024-07-15 20:32:37.094341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.958 [2024-07-15 20:32:37.234391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.958 [2024-07-15 20:32:37.297891] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.958 [2024-07-15 20:32:37.297945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:15.958 [2024-07-15 20:32:37.297957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.958 [2024-07-15 20:32:37.297965] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.958 [2024-07-15 20:32:37.297973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.958 [2024-07-15 20:32:37.298136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.958 [2024-07-15 20:32:37.298952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.958 [2024-07-15 20:32:37.299034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.958 [2024-07-15 20:32:37.299040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:15.958 [2024-07-15 20:32:37.429559] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.958 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.215 Malloc0 00:16:16.215 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.215 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 [2024-07-15 20:32:37.488375] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.216 [ 00:16:16.216 { 00:16:16.216 "allow_any_host": true, 00:16:16.216 "hosts": [], 00:16:16.216 "listen_addresses": [], 00:16:16.216 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:16.216 "subtype": "Discovery" 00:16:16.216 }, 00:16:16.216 { 00:16:16.216 "allow_any_host": true, 00:16:16.216 "hosts": [], 00:16:16.216 "listen_addresses": [ 00:16:16.216 { 00:16:16.216 "adrfam": "IPv4", 00:16:16.216 "traddr": "10.0.0.2", 00:16:16.216 "trsvcid": "4420", 00:16:16.216 "trtype": "TCP" 00:16:16.216 } 00:16:16.216 ], 00:16:16.216 "max_cntlid": 65519, 00:16:16.216 "max_namespaces": 2, 00:16:16.216 "min_cntlid": 1, 00:16:16.216 "model_number": "SPDK bdev Controller", 00:16:16.216 "namespaces": [ 00:16:16.216 { 00:16:16.216 "bdev_name": "Malloc0", 00:16:16.216 "name": "Malloc0", 00:16:16.216 "nguid": "F64FFCB102314A03A03C0B227CD3FA9B", 00:16:16.216 "nsid": 1, 00:16:16.216 "uuid": "f64ffcb1-0231-4a03-a03c-0b227cd3fa9b" 00:16:16.216 } 00:16:16.216 ], 00:16:16.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.216 "serial_number": "SPDK00000000000001", 00:16:16.216 "subtype": "NVMe" 00:16:16.216 } 00:16:16.216 ] 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=86223 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:16:16.216 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 Malloc1 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.473 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.473 Asynchronous Event Request test 00:16:16.473 Attaching to 10.0.0.2 00:16:16.473 Attached to 10.0.0.2 00:16:16.473 Registering asynchronous event callbacks... 00:16:16.473 Starting namespace attribute notice tests for all controllers... 00:16:16.473 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:16.473 aer_cb - Changed Namespace 00:16:16.473 Cleaning up... 00:16:16.473 [ 00:16:16.473 { 00:16:16.473 "allow_any_host": true, 00:16:16.473 "hosts": [], 00:16:16.473 "listen_addresses": [], 00:16:16.473 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:16.473 "subtype": "Discovery" 00:16:16.473 }, 00:16:16.473 { 00:16:16.473 "allow_any_host": true, 00:16:16.473 "hosts": [], 00:16:16.473 "listen_addresses": [ 00:16:16.473 { 00:16:16.473 "adrfam": "IPv4", 00:16:16.473 "traddr": "10.0.0.2", 00:16:16.473 "trsvcid": "4420", 00:16:16.473 "trtype": "TCP" 00:16:16.473 } 00:16:16.473 ], 00:16:16.473 "max_cntlid": 65519, 00:16:16.473 "max_namespaces": 2, 00:16:16.473 "min_cntlid": 1, 00:16:16.473 "model_number": "SPDK bdev Controller", 00:16:16.473 "namespaces": [ 00:16:16.473 { 00:16:16.473 "bdev_name": "Malloc0", 00:16:16.473 "name": "Malloc0", 00:16:16.474 "nguid": "F64FFCB102314A03A03C0B227CD3FA9B", 00:16:16.474 "nsid": 1, 00:16:16.474 "uuid": "f64ffcb1-0231-4a03-a03c-0b227cd3fa9b" 00:16:16.474 }, 00:16:16.474 { 00:16:16.474 "bdev_name": "Malloc1", 00:16:16.474 "name": "Malloc1", 00:16:16.474 "nguid": "6A1D056FA05C43EEB67AE8D110E5D145", 00:16:16.474 "nsid": 2, 00:16:16.474 "uuid": "6a1d056f-a05c-43ee-b67a-e8d110e5d145" 00:16:16.474 } 00:16:16.474 ], 00:16:16.474 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:16.474 "serial_number": "SPDK00000000000001", 00:16:16.474 "subtype": "NVMe" 00:16:16.474 } 00:16:16.474 ] 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 86223 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.474 rmmod nvme_tcp 00:16:16.474 rmmod nvme_fabrics 00:16:16.474 rmmod nvme_keyring 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 86183 ']' 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 86183 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 86183 ']' 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 86183 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86183 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86183' 00:16:16.474 killing process with pid 86183 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 86183 00:16:16.474 20:32:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 86183 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
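The nvmf_aer run above follows the same pattern: the target gains a second namespace while the aer example binary waits for the namespace-attribute-changed notice on the admin queue. A minimal sketch of that flow, assuming scripts/rpc.py talks to the nvmf_tgt started earlier in this log and that the aer binary is the one built under test/nvme/aer; NQNs, sizes and the touch file mirror the trace:

  # target: TCP transport, a subsystem capped at two namespaces, first namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host: wait for the AER and touch a file when it arrives (runs in the background)
  test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # adding a second namespace is what produced the "aer_cb - Changed Namespace" line above
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2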
00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:16.730 00:16:16.730 real 0m1.574s 00:16:16.730 user 0m3.370s 00:16:16.730 sys 0m0.534s 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:16.730 20:32:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:16.730 ************************************ 00:16:16.730 END TEST nvmf_aer 00:16:16.731 ************************************ 00:16:16.731 20:32:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:16.731 20:32:38 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:16.731 20:32:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:16.731 20:32:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.731 20:32:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:16.731 ************************************ 00:16:16.731 START TEST nvmf_async_init 00:16:16.731 ************************************ 00:16:16.731 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:16:16.989 * Looking for test storage... 00:16:16.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.989 
20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fd89816b5fcc4cadb7df3bf0a0e784b0 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:16.989 Cannot find device "nvmf_tgt_br" 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.989 Cannot find device "nvmf_tgt_br2" 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:16.989 Cannot find device "nvmf_tgt_br" 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:16.989 Cannot find device "nvmf_tgt_br2" 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:16:16.989 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.990 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.990 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:17.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:17.249 00:16:17.249 --- 10.0.0.2 ping statistics --- 00:16:17.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.249 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:17.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:16:17.249 00:16:17.249 --- 10.0.0.3 ping statistics --- 00:16:17.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.249 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:17.249 00:16:17.249 --- 10.0.0.1 ping statistics --- 00:16:17.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.249 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=86393 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 86393 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 86393 ']' 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.249 20:32:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:17.508 [2024-07-15 20:32:38.788711] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:17.508 [2024-07-15 20:32:38.788796] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.508 [2024-07-15 20:32:38.926657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.508 [2024-07-15 20:32:38.997860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.508 [2024-07-15 20:32:38.997935] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
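[editor's note] The trace above is nvmf_veth_init from test/nvmf/common.sh building the virtual test network: a target network namespace, three veth pairs, a bridge joining the host-side ends, 10.0.0.x/24 addresses, an iptables rule admitting TCP port 4420, and ping checks in both directions. A minimal standalone sketch of the same topology, with interface and namespace names taken from the trace (the real helper also handles cleanup of leftovers, which is what produces the "Cannot find device" lines above):

# rebuild the nvmf test topology by hand (assumes root and iproute2)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target ends move into the namespace
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge ties the host-side veth ends together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target namespace -> host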
00:16:17.508 [2024-07-15 20:32:38.997946] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.508 [2024-07-15 20:32:38.997954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.508 [2024-07-15 20:32:38.997961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.508 [2024-07-15 20:32:38.998002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 [2024-07-15 20:32:39.857809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 null0 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fd89816b5fcc4cadb7df3bf0a0e784b0 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:18.442 
20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 [2024-07-15 20:32:39.897923] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.442 20:32:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.700 nvme0n1 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.700 [ 00:16:18.700 { 00:16:18.700 "aliases": [ 00:16:18.700 "fd89816b-5fcc-4cad-b7df-3bf0a0e784b0" 00:16:18.700 ], 00:16:18.700 "assigned_rate_limits": { 00:16:18.700 "r_mbytes_per_sec": 0, 00:16:18.700 "rw_ios_per_sec": 0, 00:16:18.700 "rw_mbytes_per_sec": 0, 00:16:18.700 "w_mbytes_per_sec": 0 00:16:18.700 }, 00:16:18.700 "block_size": 512, 00:16:18.700 "claimed": false, 00:16:18.700 "driver_specific": { 00:16:18.700 "mp_policy": "active_passive", 00:16:18.700 "nvme": [ 00:16:18.700 { 00:16:18.700 "ctrlr_data": { 00:16:18.700 "ana_reporting": false, 00:16:18.700 "cntlid": 1, 00:16:18.700 "firmware_revision": "24.09", 00:16:18.700 "model_number": "SPDK bdev Controller", 00:16:18.700 "multi_ctrlr": true, 00:16:18.700 "oacs": { 00:16:18.700 "firmware": 0, 00:16:18.700 "format": 0, 00:16:18.700 "ns_manage": 0, 00:16:18.700 "security": 0 00:16:18.700 }, 00:16:18.700 "serial_number": "00000000000000000000", 00:16:18.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.700 "vendor_id": "0x8086" 00:16:18.700 }, 00:16:18.700 "ns_data": { 00:16:18.700 "can_share": true, 00:16:18.700 "id": 1 00:16:18.700 }, 00:16:18.700 "trid": { 00:16:18.700 "adrfam": "IPv4", 00:16:18.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.700 "traddr": "10.0.0.2", 00:16:18.700 "trsvcid": "4420", 00:16:18.700 "trtype": "TCP" 00:16:18.700 }, 00:16:18.700 "vs": { 00:16:18.700 "nvme_version": "1.3" 00:16:18.700 } 00:16:18.700 } 00:16:18.700 ] 00:16:18.700 }, 00:16:18.700 "memory_domains": [ 00:16:18.700 { 00:16:18.700 "dma_device_id": "system", 00:16:18.700 "dma_device_type": 1 00:16:18.700 } 00:16:18.700 ], 00:16:18.700 "name": "nvme0n1", 00:16:18.700 "num_blocks": 2097152, 00:16:18.700 "product_name": "NVMe disk", 00:16:18.700 "supported_io_types": { 00:16:18.700 "abort": true, 00:16:18.700 "compare": true, 00:16:18.700 "compare_and_write": true, 00:16:18.700 "copy": true, 00:16:18.700 "flush": true, 00:16:18.700 "get_zone_info": false, 00:16:18.700 "nvme_admin": true, 00:16:18.700 "nvme_io": true, 00:16:18.700 "nvme_io_md": false, 00:16:18.700 "nvme_iov_md": false, 00:16:18.700 "read": true, 00:16:18.700 "reset": true, 00:16:18.700 "seek_data": false, 00:16:18.700 "seek_hole": false, 00:16:18.700 "unmap": false, 00:16:18.700 "write": true, 00:16:18.700 "write_zeroes": true, 00:16:18.700 "zcopy": false, 00:16:18.700 
"zone_append": false, 00:16:18.700 "zone_management": false 00:16:18.700 }, 00:16:18.700 "uuid": "fd89816b-5fcc-4cad-b7df-3bf0a0e784b0", 00:16:18.700 "zoned": false 00:16:18.700 } 00:16:18.700 ] 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.700 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.700 [2024-07-15 20:32:40.166688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:18.700 [2024-07-15 20:32:40.166789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x865a30 (9): Bad file descriptor 00:16:18.958 [2024-07-15 20:32:40.299112] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:18.958 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.958 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:18.958 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.958 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.958 [ 00:16:18.958 { 00:16:18.958 "aliases": [ 00:16:18.958 "fd89816b-5fcc-4cad-b7df-3bf0a0e784b0" 00:16:18.958 ], 00:16:18.958 "assigned_rate_limits": { 00:16:18.958 "r_mbytes_per_sec": 0, 00:16:18.958 "rw_ios_per_sec": 0, 00:16:18.958 "rw_mbytes_per_sec": 0, 00:16:18.958 "w_mbytes_per_sec": 0 00:16:18.958 }, 00:16:18.958 "block_size": 512, 00:16:18.958 "claimed": false, 00:16:18.958 "driver_specific": { 00:16:18.958 "mp_policy": "active_passive", 00:16:18.958 "nvme": [ 00:16:18.958 { 00:16:18.958 "ctrlr_data": { 00:16:18.958 "ana_reporting": false, 00:16:18.958 "cntlid": 2, 00:16:18.958 "firmware_revision": "24.09", 00:16:18.958 "model_number": "SPDK bdev Controller", 00:16:18.958 "multi_ctrlr": true, 00:16:18.958 "oacs": { 00:16:18.958 "firmware": 0, 00:16:18.958 "format": 0, 00:16:18.958 "ns_manage": 0, 00:16:18.958 "security": 0 00:16:18.958 }, 00:16:18.958 "serial_number": "00000000000000000000", 00:16:18.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.958 "vendor_id": "0x8086" 00:16:18.958 }, 00:16:18.958 "ns_data": { 00:16:18.958 "can_share": true, 00:16:18.959 "id": 1 00:16:18.959 }, 00:16:18.959 "trid": { 00:16:18.959 "adrfam": "IPv4", 00:16:18.959 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:18.959 "traddr": "10.0.0.2", 00:16:18.959 "trsvcid": "4420", 00:16:18.959 "trtype": "TCP" 00:16:18.959 }, 00:16:18.959 "vs": { 00:16:18.959 "nvme_version": "1.3" 00:16:18.959 } 00:16:18.959 } 00:16:18.959 ] 00:16:18.959 }, 00:16:18.959 "memory_domains": [ 00:16:18.959 { 00:16:18.959 "dma_device_id": "system", 00:16:18.959 "dma_device_type": 1 00:16:18.959 } 00:16:18.959 ], 00:16:18.959 "name": "nvme0n1", 00:16:18.959 "num_blocks": 2097152, 00:16:18.959 "product_name": "NVMe disk", 00:16:18.959 "supported_io_types": { 00:16:18.959 "abort": true, 00:16:18.959 "compare": true, 00:16:18.959 "compare_and_write": true, 00:16:18.959 "copy": true, 00:16:18.959 "flush": true, 00:16:18.959 "get_zone_info": false, 00:16:18.959 "nvme_admin": true, 00:16:18.959 "nvme_io": true, 00:16:18.959 "nvme_io_md": false, 00:16:18.959 "nvme_iov_md": false, 00:16:18.959 "read": true, 
00:16:18.959 "reset": true, 00:16:18.959 "seek_data": false, 00:16:18.959 "seek_hole": false, 00:16:18.959 "unmap": false, 00:16:18.959 "write": true, 00:16:18.959 "write_zeroes": true, 00:16:18.959 "zcopy": false, 00:16:18.959 "zone_append": false, 00:16:18.959 "zone_management": false 00:16:18.959 }, 00:16:18.959 "uuid": "fd89816b-5fcc-4cad-b7df-3bf0a0e784b0", 00:16:18.959 "zoned": false 00:16:18.959 } 00:16:18.959 ] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wT5nGNjwl1 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wT5nGNjwl1 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.959 [2024-07-15 20:32:40.366963] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:18.959 [2024-07-15 20:32:40.367128] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wT5nGNjwl1 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.959 [2024-07-15 20:32:40.374949] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wT5nGNjwl1 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.959 20:32:40 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:18.959 [2024-07-15 20:32:40.382972] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:18.959 [2024-07-15 20:32:40.383037] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:18.959 nvme0n1 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.959 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:19.217 [ 00:16:19.217 { 00:16:19.217 "aliases": [ 00:16:19.217 "fd89816b-5fcc-4cad-b7df-3bf0a0e784b0" 00:16:19.217 ], 00:16:19.217 "assigned_rate_limits": { 00:16:19.217 "r_mbytes_per_sec": 0, 00:16:19.217 "rw_ios_per_sec": 0, 00:16:19.217 "rw_mbytes_per_sec": 0, 00:16:19.217 "w_mbytes_per_sec": 0 00:16:19.217 }, 00:16:19.217 "block_size": 512, 00:16:19.217 "claimed": false, 00:16:19.217 "driver_specific": { 00:16:19.217 "mp_policy": "active_passive", 00:16:19.217 "nvme": [ 00:16:19.217 { 00:16:19.217 "ctrlr_data": { 00:16:19.217 "ana_reporting": false, 00:16:19.217 "cntlid": 3, 00:16:19.217 "firmware_revision": "24.09", 00:16:19.217 "model_number": "SPDK bdev Controller", 00:16:19.217 "multi_ctrlr": true, 00:16:19.217 "oacs": { 00:16:19.217 "firmware": 0, 00:16:19.217 "format": 0, 00:16:19.217 "ns_manage": 0, 00:16:19.217 "security": 0 00:16:19.217 }, 00:16:19.217 "serial_number": "00000000000000000000", 00:16:19.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:19.217 "vendor_id": "0x8086" 00:16:19.217 }, 00:16:19.217 "ns_data": { 00:16:19.217 "can_share": true, 00:16:19.217 "id": 1 00:16:19.217 }, 00:16:19.217 "trid": { 00:16:19.217 "adrfam": "IPv4", 00:16:19.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:19.217 "traddr": "10.0.0.2", 00:16:19.217 "trsvcid": "4421", 00:16:19.217 "trtype": "TCP" 00:16:19.217 }, 00:16:19.217 "vs": { 00:16:19.217 "nvme_version": "1.3" 00:16:19.217 } 00:16:19.217 } 00:16:19.217 ] 00:16:19.217 }, 00:16:19.217 "memory_domains": [ 00:16:19.217 { 00:16:19.217 "dma_device_id": "system", 00:16:19.217 "dma_device_type": 1 00:16:19.217 } 00:16:19.217 ], 00:16:19.217 "name": "nvme0n1", 00:16:19.217 "num_blocks": 2097152, 00:16:19.217 "product_name": "NVMe disk", 00:16:19.217 "supported_io_types": { 00:16:19.217 "abort": true, 00:16:19.217 "compare": true, 00:16:19.217 "compare_and_write": true, 00:16:19.217 "copy": true, 00:16:19.217 "flush": true, 00:16:19.217 "get_zone_info": false, 00:16:19.217 "nvme_admin": true, 00:16:19.217 "nvme_io": true, 00:16:19.217 "nvme_io_md": false, 00:16:19.217 "nvme_iov_md": false, 00:16:19.217 "read": true, 00:16:19.217 "reset": true, 00:16:19.217 "seek_data": false, 00:16:19.217 "seek_hole": false, 00:16:19.217 "unmap": false, 00:16:19.217 "write": true, 00:16:19.217 "write_zeroes": true, 00:16:19.217 "zcopy": false, 00:16:19.217 "zone_append": false, 00:16:19.218 "zone_management": false 00:16:19.218 }, 00:16:19.218 "uuid": "fd89816b-5fcc-4cad-b7df-3bf0a0e784b0", 00:16:19.218 "zoned": false 00:16:19.218 } 00:16:19.218 ] 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.wT5nGNjwl1 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.218 rmmod nvme_tcp 00:16:19.218 rmmod nvme_fabrics 00:16:19.218 rmmod nvme_keyring 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 86393 ']' 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 86393 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 86393 ']' 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 86393 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86393 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:19.218 killing process with pid 86393 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86393' 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 86393 00:16:19.218 [2024-07-15 20:32:40.638129] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:19.218 [2024-07-15 20:32:40.638164] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:16:19.218 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 86393 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.475 
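[editor's note] The async_init pass above is driven entirely over JSON-RPC: rpc_cmd is the suite's wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. A condensed sketch of the same sequence calling rpc.py directly, with the NGUID and the TLS PSK copied from the trace (the suite additionally checks the bdev_get_bdevs output after each attach; the reset at 20:32:40 exercises the reconnect path shown in the "Resetting controller successful" line):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py             # rpc_cmd in the suite wraps this script

# first pass: plain TCP listener on 4420, null bdev exported through subsystem cnode0
$rpc nvmf_create_transport -t tcp -o                        # same transport options as the suite
$rpc bdev_null_create null0 1024 512                        # 1024 MiB, 512-byte blocks -> 2097152 blocks
$rpc bdev_wait_for_examine
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host for the first pass
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fd89816b5fcc4cadb7df3bf0a0e784b0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
$rpc bdev_get_bdevs -b nvme0n1                              # uuid of nvme0n1 matches the NGUID above
$rpc bdev_nvme_reset_controller nvme0
$rpc bdev_nvme_detach_controller nvme0

# second pass: TLS-secured listener on 4421 restricted to host1 via a PSK file
key=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
chmod 0600 "$key"
$rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
$rpc bdev_nvme_detach_controller nvme0
rm -f "$key"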
20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.475 00:16:19.475 real 0m2.636s 00:16:19.475 user 0m2.547s 00:16:19.475 sys 0m0.570s 00:16:19.475 ************************************ 00:16:19.475 END TEST nvmf_async_init 00:16:19.475 ************************************ 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.475 20:32:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:19.475 20:32:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.475 20:32:40 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:19.475 20:32:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.476 20:32:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.476 20:32:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.476 ************************************ 00:16:19.476 START TEST dma 00:16:19.476 ************************************ 00:16:19.476 20:32:40 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:16:19.476 * Looking for test storage... 00:16:19.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.476 20:32:40 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.476 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.735 20:32:40 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.735 20:32:40 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.735 20:32:40 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.735 20:32:40 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:40 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:40 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:40 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:16:19.735 20:32:40 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.735 20:32:40 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.735 20:32:40 nvmf_tcp.dma -- 
host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:16:19.735 20:32:40 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:16:19.735 00:16:19.735 real 0m0.098s 00:16:19.735 user 0m0.044s 00:16:19.735 sys 0m0.062s 00:16:19.735 20:32:40 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.735 20:32:40 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:16:19.735 ************************************ 00:16:19.735 END TEST dma 00:16:19.735 ************************************ 00:16:19.735 20:32:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.735 20:32:41 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:19.735 20:32:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.735 20:32:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.735 20:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.735 ************************************ 00:16:19.735 START TEST nvmf_identify 00:16:19.735 ************************************ 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:16:19.735 * Looking for test storage... 00:16:19.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:19.735 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.736 Cannot find device "nvmf_tgt_br" 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.736 Cannot find device "nvmf_tgt_br2" 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:16:19.736 Cannot find device "nvmf_tgt_br" 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.736 Cannot find device "nvmf_tgt_br2" 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:16:19.736 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:19.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:19.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:19.994 00:16:19.994 --- 10.0.0.2 ping statistics --- 00:16:19.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.994 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:19.994 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:19.994 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:19.994 00:16:19.994 --- 10.0.0.3 ping statistics --- 00:16:19.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.994 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:19.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:19.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:19.994 00:16:19.994 --- 10.0.0.1 ping statistics --- 00:16:19.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:19.994 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:19.994 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86660 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86660 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 86660 ']' 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.253 20:32:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:20.253 [2024-07-15 20:32:41.556990] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:20.253 [2024-07-15 20:32:41.557085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.253 [2024-07-15 20:32:41.701499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.510 [2024-07-15 20:32:41.774452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.510 [2024-07-15 20:32:41.774506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.510 [2024-07-15 20:32:41.774520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.510 [2024-07-15 20:32:41.774530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.510 [2024-07-15 20:32:41.774539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.510 [2024-07-15 20:32:41.774810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.510 [2024-07-15 20:32:41.775091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.510 [2024-07-15 20:32:41.775169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.510 [2024-07-15 20:32:41.775179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.442 [2024-07-15 20:32:42.618481] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.442 Malloc0 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.442 
20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.442 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.443 [2024-07-15 20:32:42.720041] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.443 [ 00:16:21.443 { 00:16:21.443 "allow_any_host": true, 00:16:21.443 "hosts": [], 00:16:21.443 "listen_addresses": [ 00:16:21.443 { 00:16:21.443 "adrfam": "IPv4", 00:16:21.443 "traddr": "10.0.0.2", 00:16:21.443 "trsvcid": "4420", 00:16:21.443 "trtype": "TCP" 00:16:21.443 } 00:16:21.443 ], 00:16:21.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:21.443 "subtype": "Discovery" 00:16:21.443 }, 00:16:21.443 { 00:16:21.443 "allow_any_host": true, 00:16:21.443 "hosts": [], 00:16:21.443 "listen_addresses": [ 00:16:21.443 { 00:16:21.443 "adrfam": "IPv4", 00:16:21.443 "traddr": "10.0.0.2", 00:16:21.443 "trsvcid": "4420", 00:16:21.443 "trtype": "TCP" 00:16:21.443 } 00:16:21.443 ], 00:16:21.443 "max_cntlid": 65519, 00:16:21.443 "max_namespaces": 32, 00:16:21.443 "min_cntlid": 1, 00:16:21.443 "model_number": "SPDK bdev Controller", 00:16:21.443 "namespaces": [ 00:16:21.443 { 00:16:21.443 "bdev_name": "Malloc0", 00:16:21.443 "eui64": "ABCDEF0123456789", 00:16:21.443 "name": "Malloc0", 00:16:21.443 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:16:21.443 "nsid": 1, 00:16:21.443 "uuid": "490826f6-6575-4053-834d-b25d1179759a" 00:16:21.443 } 00:16:21.443 ], 00:16:21.443 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.443 "serial_number": "SPDK00000000000001", 00:16:21.443 "subtype": "NVMe" 00:16:21.443 } 00:16:21.443 ] 
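For reference, the rpc_cmd calls traced above correspond to ordinary scripts/rpc.py invocations against the running nvmf_tgt. The lines below are a minimal illustrative sketch, not test output; they assume the repository path shown for nvmf_tgt (/home/vagrant/spdk_repo/spdk) and the default /var/tmp/spdk.sock RPC socket, and reuse the transport options, bdev size, NQN, NGUID/EUI64 and 10.0.0.2:4420 listener that appear in the trace:

    # hypothetical helper variable; the rpc.py path is assumed from the nvmf_tgt path in the log
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport with 8192-byte in-capsule data, as in host/identify.sh@24
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # 64 MiB malloc bdev with 512-byte blocks (MALLOC_BLOCK_SIZE=512)
    $rpc bdev_malloc_create 64 512 -b Malloc0

    # subsystem allowing any host, with the serial number used by the test
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001

    # attach Malloc0 as namespace 1 with the NGUID/EUI64 seen in nvmf_get_subsystems
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

    # listeners for the NVM subsystem and the discovery subsystem on 10.0.0.2:4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # dump the resulting configuration (the JSON shown above)
    $rpc nvmf_get_subsystems
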
00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.443 20:32:42 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:16:21.443 [2024-07-15 20:32:42.771687] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:21.443 [2024-07-15 20:32:42.771744] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86713 ] 00:16:21.443 [2024-07-15 20:32:42.919219] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:16:21.443 [2024-07-15 20:32:42.919293] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:21.443 [2024-07-15 20:32:42.919301] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:21.443 [2024-07-15 20:32:42.919318] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:21.443 [2024-07-15 20:32:42.919326] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:21.443 [2024-07-15 20:32:42.919483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:16:21.443 [2024-07-15 20:32:42.919537] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x156aa60 0 00:16:21.443 [2024-07-15 20:32:42.931900] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:21.443 [2024-07-15 20:32:42.931932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:21.443 [2024-07-15 20:32:42.931939] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:21.443 [2024-07-15 20:32:42.931943] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:21.443 [2024-07-15 20:32:42.931993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.932002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.932006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.443 [2024-07-15 20:32:42.932023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:21.443 [2024-07-15 20:32:42.932059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.443 [2024-07-15 20:32:42.939892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.443 [2024-07-15 20:32:42.939918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.443 [2024-07-15 20:32:42.939924] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.939930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.443 [2024-07-15 20:32:42.939944] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:21.443 [2024-07-15 20:32:42.939954] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:16:21.443 [2024-07-15 20:32:42.939961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:16:21.443 [2024-07-15 20:32:42.939980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.939986] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.939991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.443 [2024-07-15 20:32:42.940001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.443 [2024-07-15 20:32:42.940033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.443 [2024-07-15 20:32:42.940124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.443 [2024-07-15 20:32:42.940132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.443 [2024-07-15 20:32:42.940136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.443 [2024-07-15 20:32:42.940147] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:16:21.443 [2024-07-15 20:32:42.940155] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:16:21.443 [2024-07-15 20:32:42.940164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.443 [2024-07-15 20:32:42.940181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.443 [2024-07-15 20:32:42.940202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.443 [2024-07-15 20:32:42.940265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.443 [2024-07-15 20:32:42.940272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.443 [2024-07-15 20:32:42.940277] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.443 [2024-07-15 20:32:42.940288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:16:21.443 [2024-07-15 20:32:42.940297] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:16:21.443 [2024-07-15 20:32:42.940306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940315] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 
00:16:21.443 [2024-07-15 20:32:42.940323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.443 [2024-07-15 20:32:42.940343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.443 [2024-07-15 20:32:42.940399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.443 [2024-07-15 20:32:42.940407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.443 [2024-07-15 20:32:42.940411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.443 [2024-07-15 20:32:42.940421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:21.443 [2024-07-15 20:32:42.940432] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.443 [2024-07-15 20:32:42.940450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.443 [2024-07-15 20:32:42.940469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.443 [2024-07-15 20:32:42.940537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.443 [2024-07-15 20:32:42.940545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.443 [2024-07-15 20:32:42.940549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940553] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.443 [2024-07-15 20:32:42.940558] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:16:21.443 [2024-07-15 20:32:42.940564] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:16:21.443 [2024-07-15 20:32:42.940573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:21.443 [2024-07-15 20:32:42.940680] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:16:21.443 [2024-07-15 20:32:42.940695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:21.443 [2024-07-15 20:32:42.940707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.443 [2024-07-15 20:32:42.940712] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.940716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.940724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.444 
[2024-07-15 20:32:42.940748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.444 [2024-07-15 20:32:42.940812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.444 [2024-07-15 20:32:42.940826] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.444 [2024-07-15 20:32:42.940831] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.940836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.444 [2024-07-15 20:32:42.940842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:21.444 [2024-07-15 20:32:42.940853] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.940859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.940863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.940882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.444 [2024-07-15 20:32:42.940906] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.444 [2024-07-15 20:32:42.940977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.444 [2024-07-15 20:32:42.940984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.444 [2024-07-15 20:32:42.940988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.940992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.444 [2024-07-15 20:32:42.940998] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:21.444 [2024-07-15 20:32:42.941004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:16:21.444 [2024-07-15 20:32:42.941013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:16:21.444 [2024-07-15 20:32:42.941024] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:16:21.444 [2024-07-15 20:32:42.941037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.444 [2024-07-15 20:32:42.941074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.444 [2024-07-15 20:32:42.941175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.444 [2024-07-15 20:32:42.941183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.444 [2024-07-15 20:32:42.941187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:16:21.444 [2024-07-15 20:32:42.941192] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156aa60): datao=0, datal=4096, cccid=0 00:16:21.444 [2024-07-15 20:32:42.941197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ad840) on tqpair(0x156aa60): expected_datao=0, payload_size=4096 00:16:21.444 [2024-07-15 20:32:42.941202] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941211] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941216] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.444 [2024-07-15 20:32:42.941232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.444 [2024-07-15 20:32:42.941236] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941241] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.444 [2024-07-15 20:32:42.941251] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:16:21.444 [2024-07-15 20:32:42.941257] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:16:21.444 [2024-07-15 20:32:42.941262] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:16:21.444 [2024-07-15 20:32:42.941268] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:16:21.444 [2024-07-15 20:32:42.941274] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:16:21.444 [2024-07-15 20:32:42.941279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:16:21.444 [2024-07-15 20:32:42.941288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:16:21.444 [2024-07-15 20:32:42.941297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941302] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941306] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.444 [2024-07-15 20:32:42.941335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.444 [2024-07-15 20:32:42.941403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.444 [2024-07-15 20:32:42.941411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.444 [2024-07-15 20:32:42.941415] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.444 [2024-07-15 20:32:42.941428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 
20:32:42.941432] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941436] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.444 [2024-07-15 20:32:42.941450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.444 [2024-07-15 20:32:42.941472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.444 [2024-07-15 20:32:42.941493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941501] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.444 [2024-07-15 20:32:42.941513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:16:21.444 [2024-07-15 20:32:42.941527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:21.444 [2024-07-15 20:32:42.941536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941549] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.444 [2024-07-15 20:32:42.941570] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad840, cid 0, qid 0 00:16:21.444 [2024-07-15 20:32:42.941579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ad9c0, cid 1, qid 0 00:16:21.444 [2024-07-15 20:32:42.941584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adb40, cid 2, qid 0 00:16:21.444 [2024-07-15 20:32:42.941589] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.444 [2024-07-15 20:32:42.941594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ade40, cid 4, qid 0 00:16:21.444 [2024-07-15 20:32:42.941701] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:16:21.444 [2024-07-15 20:32:42.941724] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.444 [2024-07-15 20:32:42.941729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ade40) on tqpair=0x156aa60 00:16:21.444 [2024-07-15 20:32:42.941740] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:16:21.444 [2024-07-15 20:32:42.941750] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:16:21.444 [2024-07-15 20:32:42.941764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.941777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.444 [2024-07-15 20:32:42.941798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ade40, cid 4, qid 0 00:16:21.444 [2024-07-15 20:32:42.941891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.444 [2024-07-15 20:32:42.941901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.444 [2024-07-15 20:32:42.941905] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941910] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156aa60): datao=0, datal=4096, cccid=4 00:16:21.444 [2024-07-15 20:32:42.941915] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ade40) on tqpair(0x156aa60): expected_datao=0, payload_size=4096 00:16:21.444 [2024-07-15 20:32:42.941920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941928] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941933] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.444 [2024-07-15 20:32:42.941949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.444 [2024-07-15 20:32:42.941953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.941958] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ade40) on tqpair=0x156aa60 00:16:21.444 [2024-07-15 20:32:42.941973] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:16:21.444 [2024-07-15 20:32:42.942006] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.444 [2024-07-15 20:32:42.942013] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156aa60) 00:16:21.444 [2024-07-15 20:32:42.942021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.445 [2024-07-15 20:32:42.942030] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.445 [2024-07-15 20:32:42.942034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:16:21.445 [2024-07-15 20:32:42.942038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x156aa60) 00:16:21.703 [2024-07-15 20:32:42.942045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.703 [2024-07-15 20:32:42.942073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ade40, cid 4, qid 0 00:16:21.703 [2024-07-15 20:32:42.942082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adfc0, cid 5, qid 0 00:16:21.703 [2024-07-15 20:32:42.942188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.703 [2024-07-15 20:32:42.942201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.703 [2024-07-15 20:32:42.942206] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.942210] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156aa60): datao=0, datal=1024, cccid=4 00:16:21.703 [2024-07-15 20:32:42.942215] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ade40) on tqpair(0x156aa60): expected_datao=0, payload_size=1024 00:16:21.703 [2024-07-15 20:32:42.942220] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.942228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.942232] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.942238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.703 [2024-07-15 20:32:42.942244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.703 [2024-07-15 20:32:42.942248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.942253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adfc0) on tqpair=0x156aa60 00:16:21.703 [2024-07-15 20:32:42.987899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.703 [2024-07-15 20:32:42.987940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.703 [2024-07-15 20:32:42.987946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.987952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ade40) on tqpair=0x156aa60 00:16:21.703 [2024-07-15 20:32:42.987980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.703 [2024-07-15 20:32:42.987987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156aa60) 00:16:21.703 [2024-07-15 20:32:42.988000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.703 [2024-07-15 20:32:42.988042] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ade40, cid 4, qid 0 00:16:21.703 [2024-07-15 20:32:42.988174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.704 [2024-07-15 20:32:42.988182] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.704 [2024-07-15 20:32:42.988186] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988190] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156aa60): datao=0, datal=3072, cccid=4 00:16:21.704 [2024-07-15 
20:32:42.988196] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ade40) on tqpair(0x156aa60): expected_datao=0, payload_size=3072 00:16:21.704 [2024-07-15 20:32:42.988201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988215] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988225] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.704 [2024-07-15 20:32:42.988231] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.704 [2024-07-15 20:32:42.988235] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ade40) on tqpair=0x156aa60 00:16:21.704 [2024-07-15 20:32:42.988251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988256] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x156aa60) 00:16:21.704 [2024-07-15 20:32:42.988264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.704 [2024-07-15 20:32:42.988292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15ade40, cid 4, qid 0 00:16:21.704 [2024-07-15 20:32:42.988384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.704 [2024-07-15 20:32:42.988391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.704 [2024-07-15 20:32:42.988395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988399] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x156aa60): datao=0, datal=8, cccid=4 00:16:21.704 [2024-07-15 20:32:42.988404] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15ade40) on tqpair(0x156aa60): expected_datao=0, payload_size=8 00:16:21.704 [2024-07-15 20:32:42.988409] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988417] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:42.988421] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:43.030013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.704 [2024-07-15 20:32:43.030061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.704 [2024-07-15 20:32:43.030067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.704 [2024-07-15 20:32:43.030074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ade40) on tqpair=0x156aa60 00:16:21.704 ===================================================== 00:16:21.704 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:21.704 ===================================================== 00:16:21.704 Controller Capabilities/Features 00:16:21.704 ================================ 00:16:21.704 Vendor ID: 0000 00:16:21.704 Subsystem Vendor ID: 0000 00:16:21.704 Serial Number: .................... 00:16:21.704 Model Number: ........................................ 
00:16:21.704 Firmware Version: 24.09 00:16:21.704 Recommended Arb Burst: 0 00:16:21.704 IEEE OUI Identifier: 00 00 00 00:16:21.704 Multi-path I/O 00:16:21.704 May have multiple subsystem ports: No 00:16:21.704 May have multiple controllers: No 00:16:21.704 Associated with SR-IOV VF: No 00:16:21.704 Max Data Transfer Size: 131072 00:16:21.704 Max Number of Namespaces: 0 00:16:21.704 Max Number of I/O Queues: 1024 00:16:21.704 NVMe Specification Version (VS): 1.3 00:16:21.704 NVMe Specification Version (Identify): 1.3 00:16:21.704 Maximum Queue Entries: 128 00:16:21.704 Contiguous Queues Required: Yes 00:16:21.704 Arbitration Mechanisms Supported 00:16:21.704 Weighted Round Robin: Not Supported 00:16:21.704 Vendor Specific: Not Supported 00:16:21.704 Reset Timeout: 15000 ms 00:16:21.704 Doorbell Stride: 4 bytes 00:16:21.704 NVM Subsystem Reset: Not Supported 00:16:21.704 Command Sets Supported 00:16:21.704 NVM Command Set: Supported 00:16:21.704 Boot Partition: Not Supported 00:16:21.704 Memory Page Size Minimum: 4096 bytes 00:16:21.704 Memory Page Size Maximum: 4096 bytes 00:16:21.704 Persistent Memory Region: Not Supported 00:16:21.704 Optional Asynchronous Events Supported 00:16:21.704 Namespace Attribute Notices: Not Supported 00:16:21.704 Firmware Activation Notices: Not Supported 00:16:21.704 ANA Change Notices: Not Supported 00:16:21.704 PLE Aggregate Log Change Notices: Not Supported 00:16:21.704 LBA Status Info Alert Notices: Not Supported 00:16:21.704 EGE Aggregate Log Change Notices: Not Supported 00:16:21.704 Normal NVM Subsystem Shutdown event: Not Supported 00:16:21.704 Zone Descriptor Change Notices: Not Supported 00:16:21.704 Discovery Log Change Notices: Supported 00:16:21.704 Controller Attributes 00:16:21.704 128-bit Host Identifier: Not Supported 00:16:21.704 Non-Operational Permissive Mode: Not Supported 00:16:21.704 NVM Sets: Not Supported 00:16:21.704 Read Recovery Levels: Not Supported 00:16:21.704 Endurance Groups: Not Supported 00:16:21.704 Predictable Latency Mode: Not Supported 00:16:21.704 Traffic Based Keep ALive: Not Supported 00:16:21.704 Namespace Granularity: Not Supported 00:16:21.704 SQ Associations: Not Supported 00:16:21.704 UUID List: Not Supported 00:16:21.704 Multi-Domain Subsystem: Not Supported 00:16:21.704 Fixed Capacity Management: Not Supported 00:16:21.704 Variable Capacity Management: Not Supported 00:16:21.704 Delete Endurance Group: Not Supported 00:16:21.704 Delete NVM Set: Not Supported 00:16:21.704 Extended LBA Formats Supported: Not Supported 00:16:21.704 Flexible Data Placement Supported: Not Supported 00:16:21.704 00:16:21.704 Controller Memory Buffer Support 00:16:21.704 ================================ 00:16:21.704 Supported: No 00:16:21.704 00:16:21.704 Persistent Memory Region Support 00:16:21.704 ================================ 00:16:21.704 Supported: No 00:16:21.704 00:16:21.704 Admin Command Set Attributes 00:16:21.704 ============================ 00:16:21.704 Security Send/Receive: Not Supported 00:16:21.704 Format NVM: Not Supported 00:16:21.704 Firmware Activate/Download: Not Supported 00:16:21.704 Namespace Management: Not Supported 00:16:21.704 Device Self-Test: Not Supported 00:16:21.704 Directives: Not Supported 00:16:21.704 NVMe-MI: Not Supported 00:16:21.704 Virtualization Management: Not Supported 00:16:21.704 Doorbell Buffer Config: Not Supported 00:16:21.704 Get LBA Status Capability: Not Supported 00:16:21.704 Command & Feature Lockdown Capability: Not Supported 00:16:21.704 Abort Command Limit: 1 00:16:21.704 Async 
Event Request Limit: 4 00:16:21.704 Number of Firmware Slots: N/A 00:16:21.704 Firmware Slot 1 Read-Only: N/A 00:16:21.704 Firmware Activation Without Reset: N/A 00:16:21.704 Multiple Update Detection Support: N/A 00:16:21.704 Firmware Update Granularity: No Information Provided 00:16:21.704 Per-Namespace SMART Log: No 00:16:21.704 Asymmetric Namespace Access Log Page: Not Supported 00:16:21.704 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:21.704 Command Effects Log Page: Not Supported 00:16:21.704 Get Log Page Extended Data: Supported 00:16:21.704 Telemetry Log Pages: Not Supported 00:16:21.704 Persistent Event Log Pages: Not Supported 00:16:21.704 Supported Log Pages Log Page: May Support 00:16:21.704 Commands Supported & Effects Log Page: Not Supported 00:16:21.704 Feature Identifiers & Effects Log Page:May Support 00:16:21.704 NVMe-MI Commands & Effects Log Page: May Support 00:16:21.704 Data Area 4 for Telemetry Log: Not Supported 00:16:21.704 Error Log Page Entries Supported: 128 00:16:21.704 Keep Alive: Not Supported 00:16:21.704 00:16:21.704 NVM Command Set Attributes 00:16:21.704 ========================== 00:16:21.704 Submission Queue Entry Size 00:16:21.704 Max: 1 00:16:21.704 Min: 1 00:16:21.704 Completion Queue Entry Size 00:16:21.704 Max: 1 00:16:21.704 Min: 1 00:16:21.704 Number of Namespaces: 0 00:16:21.704 Compare Command: Not Supported 00:16:21.704 Write Uncorrectable Command: Not Supported 00:16:21.704 Dataset Management Command: Not Supported 00:16:21.704 Write Zeroes Command: Not Supported 00:16:21.704 Set Features Save Field: Not Supported 00:16:21.704 Reservations: Not Supported 00:16:21.704 Timestamp: Not Supported 00:16:21.704 Copy: Not Supported 00:16:21.704 Volatile Write Cache: Not Present 00:16:21.704 Atomic Write Unit (Normal): 1 00:16:21.704 Atomic Write Unit (PFail): 1 00:16:21.704 Atomic Compare & Write Unit: 1 00:16:21.704 Fused Compare & Write: Supported 00:16:21.704 Scatter-Gather List 00:16:21.704 SGL Command Set: Supported 00:16:21.704 SGL Keyed: Supported 00:16:21.704 SGL Bit Bucket Descriptor: Not Supported 00:16:21.704 SGL Metadata Pointer: Not Supported 00:16:21.704 Oversized SGL: Not Supported 00:16:21.704 SGL Metadata Address: Not Supported 00:16:21.704 SGL Offset: Supported 00:16:21.704 Transport SGL Data Block: Not Supported 00:16:21.704 Replay Protected Memory Block: Not Supported 00:16:21.704 00:16:21.704 Firmware Slot Information 00:16:21.704 ========================= 00:16:21.704 Active slot: 0 00:16:21.704 00:16:21.704 00:16:21.704 Error Log 00:16:21.704 ========= 00:16:21.704 00:16:21.704 Active Namespaces 00:16:21.704 ================= 00:16:21.704 Discovery Log Page 00:16:21.704 ================== 00:16:21.704 Generation Counter: 2 00:16:21.704 Number of Records: 2 00:16:21.704 Record Format: 0 00:16:21.704 00:16:21.704 Discovery Log Entry 0 00:16:21.704 ---------------------- 00:16:21.705 Transport Type: 3 (TCP) 00:16:21.705 Address Family: 1 (IPv4) 00:16:21.705 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:21.705 Entry Flags: 00:16:21.705 Duplicate Returned Information: 1 00:16:21.705 Explicit Persistent Connection Support for Discovery: 1 00:16:21.705 Transport Requirements: 00:16:21.705 Secure Channel: Not Required 00:16:21.705 Port ID: 0 (0x0000) 00:16:21.705 Controller ID: 65535 (0xffff) 00:16:21.705 Admin Max SQ Size: 128 00:16:21.705 Transport Service Identifier: 4420 00:16:21.705 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:21.705 Transport Address: 10.0.0.2 00:16:21.705 
Discovery Log Entry 1 00:16:21.705 ---------------------- 00:16:21.705 Transport Type: 3 (TCP) 00:16:21.705 Address Family: 1 (IPv4) 00:16:21.705 Subsystem Type: 2 (NVM Subsystem) 00:16:21.705 Entry Flags: 00:16:21.705 Duplicate Returned Information: 0 00:16:21.705 Explicit Persistent Connection Support for Discovery: 0 00:16:21.705 Transport Requirements: 00:16:21.705 Secure Channel: Not Required 00:16:21.705 Port ID: 0 (0x0000) 00:16:21.705 Controller ID: 65535 (0xffff) 00:16:21.705 Admin Max SQ Size: 128 00:16:21.705 Transport Service Identifier: 4420 00:16:21.705 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:16:21.705 Transport Address: 10.0.0.2 [2024-07-15 20:32:43.030199] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:16:21.705 [2024-07-15 20:32:43.030215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad840) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.705 [2024-07-15 20:32:43.030232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15ad9c0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.705 [2024-07-15 20:32:43.030243] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adb40) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.705 [2024-07-15 20:32:43.030254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.705 [2024-07-15 20:32:43.030274] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.030296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.030328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.030428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.030436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.030440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030453] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030462] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 
20:32:43.030470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.030494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.030607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.030615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.030619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030623] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030629] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:16:21.705 [2024-07-15 20:32:43.030634] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:16:21.705 [2024-07-15 20:32:43.030645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030651] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.030663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.030682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.030746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.030753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.030757] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030762] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.030791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.030810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.030893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.030902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.030906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.030923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.030932] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.030940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.030961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.031019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.031026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.031030] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031035] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.031046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.031063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.031082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.031141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.031148] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.031152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.031167] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.031184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.031202] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.031260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.031267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.031271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.031287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.031304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.031322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.031377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.031384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.031388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.031404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.705 [2024-07-15 20:32:43.031425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.705 [2024-07-15 20:32:43.031443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.705 [2024-07-15 20:32:43.031501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.705 [2024-07-15 20:32:43.031508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.705 [2024-07-15 20:32:43.031512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.705 [2024-07-15 20:32:43.031527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.705 [2024-07-15 20:32:43.031532] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031536] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.706 [2024-07-15 20:32:43.031544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.706 [2024-07-15 20:32:43.031562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.706 [2024-07-15 20:32:43.031621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.706 [2024-07-15 20:32:43.031629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.706 [2024-07-15 20:32:43.031634] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.706 [2024-07-15 20:32:43.031649] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031655] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.706 [2024-07-15 20:32:43.031666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.706 [2024-07-15 20:32:43.031685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.706 
[2024-07-15 20:32:43.031752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.706 [2024-07-15 20:32:43.031759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.706 [2024-07-15 20:32:43.031763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.706 [2024-07-15 20:32:43.031778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031784] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.031788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.706 [2024-07-15 20:32:43.031795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.706 [2024-07-15 20:32:43.031814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.706 [2024-07-15 20:32:43.035891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.706 [2024-07-15 20:32:43.035909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.706 [2024-07-15 20:32:43.035914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.035919] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.706 [2024-07-15 20:32:43.035932] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.035938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.035942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x156aa60) 00:16:21.706 [2024-07-15 20:32:43.035951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.706 [2024-07-15 20:32:43.035977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15adcc0, cid 3, qid 0 00:16:21.706 [2024-07-15 20:32:43.036054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.706 [2024-07-15 20:32:43.036061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.706 [2024-07-15 20:32:43.036065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.706 [2024-07-15 20:32:43.036069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15adcc0) on tqpair=0x156aa60 00:16:21.706 [2024-07-15 20:32:43.036078] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:16:21.706 00:16:21.706 20:32:43 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:16:21.706 [2024-07-15 20:32:43.079058] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:16:21.706 [2024-07-15 20:32:43.079293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86720 ] 00:16:21.965 [2024-07-15 20:32:43.221182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:16:21.965 [2024-07-15 20:32:43.221253] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:16:21.965 [2024-07-15 20:32:43.221261] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:16:21.965 [2024-07-15 20:32:43.221278] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:16:21.965 [2024-07-15 20:32:43.221286] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:16:21.965 [2024-07-15 20:32:43.221423] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:16:21.965 [2024-07-15 20:32:43.221473] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xaf2a60 0 00:16:21.965 [2024-07-15 20:32:43.225894] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:16:21.965 [2024-07-15 20:32:43.225917] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:16:21.965 [2024-07-15 20:32:43.225923] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:16:21.965 [2024-07-15 20:32:43.225927] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:16:21.965 [2024-07-15 20:32:43.225972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.225980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.225984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.225999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:16:21.965 [2024-07-15 20:32:43.226030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.233889] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.233915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.233921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.233926] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.965 [2024-07-15 20:32:43.233937] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:16:21.965 [2024-07-15 20:32:43.233946] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:16:21.965 [2024-07-15 20:32:43.233954] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:16:21.965 [2024-07-15 20:32:43.233972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.233977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.233981] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.233991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.965 [2024-07-15 20:32:43.234022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.234162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.234170] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.234174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.234179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.965 [2024-07-15 20:32:43.234185] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:16:21.965 [2024-07-15 20:32:43.234193] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:16:21.965 [2024-07-15 20:32:43.234201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.234213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.234217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.234225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.965 [2024-07-15 20:32:43.234245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.234917] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.234933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.234938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.234943] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.965 [2024-07-15 20:32:43.234950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:16:21.965 [2024-07-15 20:32:43.234959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:16:21.965 [2024-07-15 20:32:43.234968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.234972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.234976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.234984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.965 [2024-07-15 20:32:43.235006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.235123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.235130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.235134] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.235138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.965 [2024-07-15 20:32:43.235145] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:21.965 [2024-07-15 20:32:43.235155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.235160] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.235164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.235172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.965 [2024-07-15 20:32:43.235190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.235817] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.235832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.235837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.235841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.965 [2024-07-15 20:32:43.235846] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:16:21.965 [2024-07-15 20:32:43.235852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:16:21.965 [2024-07-15 20:32:43.235861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:21.965 [2024-07-15 20:32:43.235968] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:16:21.965 [2024-07-15 20:32:43.235974] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:21.965 [2024-07-15 20:32:43.235984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.235989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.235993] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.236001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.965 [2024-07-15 20:32:43.236023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.236592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.236607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.236611] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.236616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.965 [2024-07-15 20:32:43.236622] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:21.965 [2024-07-15 20:32:43.236633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.236638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.965 [2024-07-15 20:32:43.236642] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.965 [2024-07-15 20:32:43.236650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.965 [2024-07-15 20:32:43.236681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.965 [2024-07-15 20:32:43.236812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.965 [2024-07-15 20:32:43.236819] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.965 [2024-07-15 20:32:43.236823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.236827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.966 [2024-07-15 20:32:43.236832] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:21.966 [2024-07-15 20:32:43.236837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.236846] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:16:21.966 [2024-07-15 20:32:43.236857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.236880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.236886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.236895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.966 [2024-07-15 20:32:43.236918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.966 [2024-07-15 20:32:43.237583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.966 [2024-07-15 20:32:43.237597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.966 [2024-07-15 20:32:43.237602] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.237607] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=4096, cccid=0 00:16:21.966 [2024-07-15 20:32:43.237612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb35840) on tqpair(0xaf2a60): expected_datao=0, payload_size=4096 00:16:21.966 [2024-07-15 20:32:43.237617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.237626] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.237631] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 
20:32:43.237648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.966 [2024-07-15 20:32:43.237655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.966 [2024-07-15 20:32:43.237659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.237663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.966 [2024-07-15 20:32:43.237673] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:16:21.966 [2024-07-15 20:32:43.237678] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:16:21.966 [2024-07-15 20:32:43.237683] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:16:21.966 [2024-07-15 20:32:43.237688] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:16:21.966 [2024-07-15 20:32:43.237694] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:16:21.966 [2024-07-15 20:32:43.237699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.237709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.237717] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.237721] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.237725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.237734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.966 [2024-07-15 20:32:43.237756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.966 [2024-07-15 20:32:43.241890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.966 [2024-07-15 20:32:43.241908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.966 [2024-07-15 20:32:43.241913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.966 [2024-07-15 20:32:43.241928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.241945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.966 [2024-07-15 20:32:43.241952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xaf2a60) 00:16:21.966 
[2024-07-15 20:32:43.241966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.966 [2024-07-15 20:32:43.241973] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.241988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.966 [2024-07-15 20:32:43.241994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.241998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.242002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.242008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.966 [2024-07-15 20:32:43.242014] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.242030] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.242039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.242043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.242051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.966 [2024-07-15 20:32:43.242078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35840, cid 0, qid 0 00:16:21.966 [2024-07-15 20:32:43.242086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb359c0, cid 1, qid 0 00:16:21.966 [2024-07-15 20:32:43.242092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35b40, cid 2, qid 0 00:16:21.966 [2024-07-15 20:32:43.242097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35cc0, cid 3, qid 0 00:16:21.966 [2024-07-15 20:32:43.242102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.966 [2024-07-15 20:32:43.242760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.966 [2024-07-15 20:32:43.242775] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.966 [2024-07-15 20:32:43.242780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.242784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.966 [2024-07-15 20:32:43.242791] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:16:21.966 [2024-07-15 20:32:43.242801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.242811] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.242818] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.242825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.242830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.242834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.242842] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:21.966 [2024-07-15 20:32:43.242864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.966 [2024-07-15 20:32:43.243016] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.966 [2024-07-15 20:32:43.243024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.966 [2024-07-15 20:32:43.243028] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.966 [2024-07-15 20:32:43.243100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.243111] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.243120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.243133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.966 [2024-07-15 20:32:43.243154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.966 [2024-07-15 20:32:43.243718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.966 [2024-07-15 20:32:43.243733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.966 [2024-07-15 20:32:43.243738] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243743] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=4096, cccid=4 00:16:21.966 [2024-07-15 20:32:43.243748] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb35e40) on tqpair(0xaf2a60): expected_datao=0, payload_size=4096 00:16:21.966 [2024-07-15 20:32:43.243753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243761] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243765] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.966 [2024-07-15 20:32:43.243785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:16:21.966 [2024-07-15 20:32:43.243789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.966 [2024-07-15 20:32:43.243810] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:16:21.966 [2024-07-15 20:32:43.243822] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.243834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:16:21.966 [2024-07-15 20:32:43.243842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.966 [2024-07-15 20:32:43.243847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.966 [2024-07-15 20:32:43.243854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.966 [2024-07-15 20:32:43.243888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.966 [2024-07-15 20:32:43.244473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.967 [2024-07-15 20:32:43.244488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.967 [2024-07-15 20:32:43.244492] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244496] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=4096, cccid=4 00:16:21.967 [2024-07-15 20:32:43.244501] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb35e40) on tqpair(0xaf2a60): expected_datao=0, payload_size=4096 00:16:21.967 [2024-07-15 20:32:43.244506] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244514] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244518] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.244547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.244551] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.244572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.244584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.244594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.244606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.244628] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.967 [2024-07-15 20:32:43.244929] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.967 [2024-07-15 20:32:43.244947] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.967 [2024-07-15 20:32:43.244951] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244956] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=4096, cccid=4 00:16:21.967 [2024-07-15 20:32:43.244961] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb35e40) on tqpair(0xaf2a60): expected_datao=0, payload_size=4096 00:16:21.967 [2024-07-15 20:32:43.244966] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244973] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.244978] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.245574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.245586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.245591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.245595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.245605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245651] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:16:21.967 [2024-07-15 20:32:43.245656] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:16:21.967 [2024-07-15 20:32:43.245662] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:16:21.967 [2024-07-15 20:32:43.245680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.245686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.245694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.245702] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.245706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.245716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.245722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:16:21.967 [2024-07-15 20:32:43.245751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.967 [2024-07-15 20:32:43.245759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35fc0, cid 5, qid 0 00:16:21.967 [2024-07-15 20:32:43.249903] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.249928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.249934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.249939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.249947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.249954] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.249958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.249962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35fc0) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.249977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.249982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.249993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.250026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35fc0, cid 5, qid 0 00:16:21.967 [2024-07-15 20:32:43.250270] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.250299] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.250304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.250308] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35fc0) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.250320] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.250325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.250333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.250353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35fc0, cid 5, qid 0 00:16:21.967 [2024-07-15 20:32:43.251035] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.251051] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.251056] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35fc0) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.251072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.251085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.251107] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35fc0, cid 5, qid 0 00:16:21.967 [2024-07-15 20:32:43.251220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.967 [2024-07-15 20:32:43.251227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.967 [2024-07-15 20:32:43.251231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35fc0) on tqpair=0xaf2a60 00:16:21.967 [2024-07-15 20:32:43.251261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.251276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.251284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.251295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.251303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251307] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.251314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.251326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.251331] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xaf2a60) 00:16:21.967 [2024-07-15 20:32:43.251338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.967 [2024-07-15 20:32:43.251368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35fc0, cid 5, qid 0 00:16:21.967 [2024-07-15 20:32:43.251375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35e40, cid 4, qid 0 00:16:21.967 [2024-07-15 20:32:43.251380] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb36140, cid 6, qid 0 00:16:21.967 [2024-07-15 
20:32:43.251385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb362c0, cid 7, qid 0 00:16:21.967 [2024-07-15 20:32:43.252028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.967 [2024-07-15 20:32:43.252044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.967 [2024-07-15 20:32:43.252049] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.252054] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=8192, cccid=5 00:16:21.967 [2024-07-15 20:32:43.252059] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb35fc0) on tqpair(0xaf2a60): expected_datao=0, payload_size=8192 00:16:21.967 [2024-07-15 20:32:43.252065] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.252084] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.252090] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.967 [2024-07-15 20:32:43.252096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.967 [2024-07-15 20:32:43.252102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.967 [2024-07-15 20:32:43.252106] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252110] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=512, cccid=4 00:16:21.968 [2024-07-15 20:32:43.252116] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb35e40) on tqpair(0xaf2a60): expected_datao=0, payload_size=512 00:16:21.968 [2024-07-15 20:32:43.252121] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252127] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252131] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.968 [2024-07-15 20:32:43.252143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.968 [2024-07-15 20:32:43.252147] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252151] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=512, cccid=6 00:16:21.968 [2024-07-15 20:32:43.252156] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb36140) on tqpair(0xaf2a60): expected_datao=0, payload_size=512 00:16:21.968 [2024-07-15 20:32:43.252161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252167] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252171] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252177] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:16:21.968 [2024-07-15 20:32:43.252183] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:16:21.968 [2024-07-15 20:32:43.252187] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252191] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xaf2a60): datao=0, datal=4096, cccid=7 00:16:21.968 [2024-07-15 20:32:43.252195] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb362c0) on tqpair(0xaf2a60): expected_datao=0, payload_size=4096 00:16:21.968 [2024-07-15 20:32:43.252200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252207] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252211] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.968 [2024-07-15 20:32:43.252226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.968 [2024-07-15 20:32:43.252230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35fc0) on tqpair=0xaf2a60 00:16:21.968 [2024-07-15 20:32:43.252255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.968 [2024-07-15 20:32:43.252263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.968 [2024-07-15 20:32:43.252266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35e40) on tqpair=0xaf2a60 00:16:21.968 [2024-07-15 20:32:43.252283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.968 [2024-07-15 20:32:43.252289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.968 [2024-07-15 20:32:43.252293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb36140) on tqpair=0xaf2a60 00:16:21.968 [2024-07-15 20:32:43.252305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.968 [2024-07-15 20:32:43.252312] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.968 [2024-07-15 20:32:43.252316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.968 [2024-07-15 20:32:43.252320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb362c0) on tqpair=0xaf2a60 00:16:21.968 ===================================================== 00:16:21.968 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.968 ===================================================== 00:16:21.968 Controller Capabilities/Features 00:16:21.968 ================================ 00:16:21.968 Vendor ID: 8086 00:16:21.968 Subsystem Vendor ID: 8086 00:16:21.968 Serial Number: SPDK00000000000001 00:16:21.968 Model Number: SPDK bdev Controller 00:16:21.968 Firmware Version: 24.09 00:16:21.968 Recommended Arb Burst: 6 00:16:21.968 IEEE OUI Identifier: e4 d2 5c 00:16:21.968 Multi-path I/O 00:16:21.968 May have multiple subsystem ports: Yes 00:16:21.968 May have multiple controllers: Yes 00:16:21.968 Associated with SR-IOV VF: No 00:16:21.968 Max Data Transfer Size: 131072 00:16:21.968 Max Number of Namespaces: 32 00:16:21.968 Max Number of I/O Queues: 127 00:16:21.968 NVMe Specification Version (VS): 1.3 00:16:21.968 NVMe Specification Version (Identify): 1.3 00:16:21.968 Maximum Queue Entries: 128 00:16:21.968 Contiguous Queues Required: Yes 00:16:21.968 Arbitration Mechanisms Supported 00:16:21.968 Weighted Round Robin: Not Supported 00:16:21.968 Vendor Specific: Not Supported 00:16:21.968 Reset Timeout: 15000 ms 00:16:21.968 
Doorbell Stride: 4 bytes 00:16:21.968 NVM Subsystem Reset: Not Supported 00:16:21.968 Command Sets Supported 00:16:21.968 NVM Command Set: Supported 00:16:21.968 Boot Partition: Not Supported 00:16:21.968 Memory Page Size Minimum: 4096 bytes 00:16:21.968 Memory Page Size Maximum: 4096 bytes 00:16:21.968 Persistent Memory Region: Not Supported 00:16:21.968 Optional Asynchronous Events Supported 00:16:21.968 Namespace Attribute Notices: Supported 00:16:21.968 Firmware Activation Notices: Not Supported 00:16:21.968 ANA Change Notices: Not Supported 00:16:21.968 PLE Aggregate Log Change Notices: Not Supported 00:16:21.968 LBA Status Info Alert Notices: Not Supported 00:16:21.968 EGE Aggregate Log Change Notices: Not Supported 00:16:21.968 Normal NVM Subsystem Shutdown event: Not Supported 00:16:21.968 Zone Descriptor Change Notices: Not Supported 00:16:21.968 Discovery Log Change Notices: Not Supported 00:16:21.968 Controller Attributes 00:16:21.968 128-bit Host Identifier: Supported 00:16:21.968 Non-Operational Permissive Mode: Not Supported 00:16:21.968 NVM Sets: Not Supported 00:16:21.968 Read Recovery Levels: Not Supported 00:16:21.968 Endurance Groups: Not Supported 00:16:21.968 Predictable Latency Mode: Not Supported 00:16:21.968 Traffic Based Keep ALive: Not Supported 00:16:21.968 Namespace Granularity: Not Supported 00:16:21.968 SQ Associations: Not Supported 00:16:21.968 UUID List: Not Supported 00:16:21.968 Multi-Domain Subsystem: Not Supported 00:16:21.968 Fixed Capacity Management: Not Supported 00:16:21.968 Variable Capacity Management: Not Supported 00:16:21.968 Delete Endurance Group: Not Supported 00:16:21.968 Delete NVM Set: Not Supported 00:16:21.968 Extended LBA Formats Supported: Not Supported 00:16:21.968 Flexible Data Placement Supported: Not Supported 00:16:21.968 00:16:21.968 Controller Memory Buffer Support 00:16:21.968 ================================ 00:16:21.968 Supported: No 00:16:21.968 00:16:21.968 Persistent Memory Region Support 00:16:21.968 ================================ 00:16:21.968 Supported: No 00:16:21.968 00:16:21.968 Admin Command Set Attributes 00:16:21.968 ============================ 00:16:21.968 Security Send/Receive: Not Supported 00:16:21.968 Format NVM: Not Supported 00:16:21.968 Firmware Activate/Download: Not Supported 00:16:21.968 Namespace Management: Not Supported 00:16:21.968 Device Self-Test: Not Supported 00:16:21.968 Directives: Not Supported 00:16:21.968 NVMe-MI: Not Supported 00:16:21.968 Virtualization Management: Not Supported 00:16:21.968 Doorbell Buffer Config: Not Supported 00:16:21.968 Get LBA Status Capability: Not Supported 00:16:21.968 Command & Feature Lockdown Capability: Not Supported 00:16:21.968 Abort Command Limit: 4 00:16:21.968 Async Event Request Limit: 4 00:16:21.968 Number of Firmware Slots: N/A 00:16:21.968 Firmware Slot 1 Read-Only: N/A 00:16:21.968 Firmware Activation Without Reset: N/A 00:16:21.968 Multiple Update Detection Support: N/A 00:16:21.968 Firmware Update Granularity: No Information Provided 00:16:21.968 Per-Namespace SMART Log: No 00:16:21.968 Asymmetric Namespace Access Log Page: Not Supported 00:16:21.968 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:16:21.968 Command Effects Log Page: Supported 00:16:21.968 Get Log Page Extended Data: Supported 00:16:21.968 Telemetry Log Pages: Not Supported 00:16:21.968 Persistent Event Log Pages: Not Supported 00:16:21.968 Supported Log Pages Log Page: May Support 00:16:21.968 Commands Supported & Effects Log Page: Not Supported 00:16:21.968 Feature Identifiers & 
Effects Log Page:May Support 00:16:21.968 NVMe-MI Commands & Effects Log Page: May Support 00:16:21.968 Data Area 4 for Telemetry Log: Not Supported 00:16:21.968 Error Log Page Entries Supported: 128 00:16:21.968 Keep Alive: Supported 00:16:21.968 Keep Alive Granularity: 10000 ms 00:16:21.968 00:16:21.968 NVM Command Set Attributes 00:16:21.968 ========================== 00:16:21.968 Submission Queue Entry Size 00:16:21.968 Max: 64 00:16:21.968 Min: 64 00:16:21.968 Completion Queue Entry Size 00:16:21.968 Max: 16 00:16:21.968 Min: 16 00:16:21.968 Number of Namespaces: 32 00:16:21.968 Compare Command: Supported 00:16:21.968 Write Uncorrectable Command: Not Supported 00:16:21.968 Dataset Management Command: Supported 00:16:21.968 Write Zeroes Command: Supported 00:16:21.968 Set Features Save Field: Not Supported 00:16:21.968 Reservations: Supported 00:16:21.968 Timestamp: Not Supported 00:16:21.968 Copy: Supported 00:16:21.968 Volatile Write Cache: Present 00:16:21.968 Atomic Write Unit (Normal): 1 00:16:21.968 Atomic Write Unit (PFail): 1 00:16:21.968 Atomic Compare & Write Unit: 1 00:16:21.968 Fused Compare & Write: Supported 00:16:21.968 Scatter-Gather List 00:16:21.968 SGL Command Set: Supported 00:16:21.968 SGL Keyed: Supported 00:16:21.968 SGL Bit Bucket Descriptor: Not Supported 00:16:21.968 SGL Metadata Pointer: Not Supported 00:16:21.968 Oversized SGL: Not Supported 00:16:21.968 SGL Metadata Address: Not Supported 00:16:21.968 SGL Offset: Supported 00:16:21.968 Transport SGL Data Block: Not Supported 00:16:21.968 Replay Protected Memory Block: Not Supported 00:16:21.968 00:16:21.968 Firmware Slot Information 00:16:21.969 ========================= 00:16:21.969 Active slot: 1 00:16:21.969 Slot 1 Firmware Revision: 24.09 00:16:21.969 00:16:21.969 00:16:21.969 Commands Supported and Effects 00:16:21.969 ============================== 00:16:21.969 Admin Commands 00:16:21.969 -------------- 00:16:21.969 Get Log Page (02h): Supported 00:16:21.969 Identify (06h): Supported 00:16:21.969 Abort (08h): Supported 00:16:21.969 Set Features (09h): Supported 00:16:21.969 Get Features (0Ah): Supported 00:16:21.969 Asynchronous Event Request (0Ch): Supported 00:16:21.969 Keep Alive (18h): Supported 00:16:21.969 I/O Commands 00:16:21.969 ------------ 00:16:21.969 Flush (00h): Supported LBA-Change 00:16:21.969 Write (01h): Supported LBA-Change 00:16:21.969 Read (02h): Supported 00:16:21.969 Compare (05h): Supported 00:16:21.969 Write Zeroes (08h): Supported LBA-Change 00:16:21.969 Dataset Management (09h): Supported LBA-Change 00:16:21.969 Copy (19h): Supported LBA-Change 00:16:21.969 00:16:21.969 Error Log 00:16:21.969 ========= 00:16:21.969 00:16:21.969 Arbitration 00:16:21.969 =========== 00:16:21.969 Arbitration Burst: 1 00:16:21.969 00:16:21.969 Power Management 00:16:21.969 ================ 00:16:21.969 Number of Power States: 1 00:16:21.969 Current Power State: Power State #0 00:16:21.969 Power State #0: 00:16:21.969 Max Power: 0.00 W 00:16:21.969 Non-Operational State: Operational 00:16:21.969 Entry Latency: Not Reported 00:16:21.969 Exit Latency: Not Reported 00:16:21.969 Relative Read Throughput: 0 00:16:21.969 Relative Read Latency: 0 00:16:21.969 Relative Write Throughput: 0 00:16:21.969 Relative Write Latency: 0 00:16:21.969 Idle Power: Not Reported 00:16:21.969 Active Power: Not Reported 00:16:21.969 Non-Operational Permissive Mode: Not Supported 00:16:21.969 00:16:21.969 Health Information 00:16:21.969 ================== 00:16:21.969 Critical Warnings: 00:16:21.969 Available Spare Space: 
OK 00:16:21.969 Temperature: OK 00:16:21.969 Device Reliability: OK 00:16:21.969 Read Only: No 00:16:21.969 Volatile Memory Backup: OK 00:16:21.969 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:21.969 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:21.969 Available Spare: 0% 00:16:21.969 Available Spare Threshold: 0% 00:16:21.969 Life Percentage Used:[2024-07-15 20:32:43.252437] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.252444] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xaf2a60) 00:16:21.969 [2024-07-15 20:32:43.252452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.969 [2024-07-15 20:32:43.252479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb362c0, cid 7, qid 0 00:16:21.969 [2024-07-15 20:32:43.253171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.969 [2024-07-15 20:32:43.253189] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.969 [2024-07-15 20:32:43.253194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.253198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb362c0) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.253245] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:16:21.969 [2024-07-15 20:32:43.253258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35840) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.253265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.969 [2024-07-15 20:32:43.253271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb359c0) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.253277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.969 [2024-07-15 20:32:43.253282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35b40) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.253287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.969 [2024-07-15 20:32:43.253293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35cc0) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.253298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:21.969 [2024-07-15 20:32:43.253309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.253314] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.253318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaf2a60) 00:16:21.969 [2024-07-15 20:32:43.253326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.969 [2024-07-15 20:32:43.253355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35cc0, cid 3, qid 0 00:16:21.969 [2024-07-15 20:32:43.253437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.969 [2024-07-15 20:32:43.253444] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.969 [2024-07-15 20:32:43.253448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.253452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35cc0) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.253461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.253466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.253470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaf2a60) 00:16:21.969 [2024-07-15 20:32:43.253483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.969 [2024-07-15 20:32:43.253505] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35cc0, cid 3, qid 0 00:16:21.969 [2024-07-15 20:32:43.257898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.969 [2024-07-15 20:32:43.257921] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.969 [2024-07-15 20:32:43.257926] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.257931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35cc0) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.257938] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:16:21.969 [2024-07-15 20:32:43.257944] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:16:21.969 [2024-07-15 20:32:43.257956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.257962] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.257966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xaf2a60) 00:16:21.969 [2024-07-15 20:32:43.257976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:21.969 [2024-07-15 20:32:43.258003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb35cc0, cid 3, qid 0 00:16:21.969 [2024-07-15 20:32:43.258087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:16:21.969 [2024-07-15 20:32:43.258095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:16:21.969 [2024-07-15 20:32:43.258099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:16:21.969 [2024-07-15 20:32:43.258103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb35cc0) on tqpair=0xaf2a60 00:16:21.969 [2024-07-15 20:32:43.258113] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:16:21.969 0% 00:16:21.969 Data Units Read: 0 00:16:21.969 Data Units Written: 0 00:16:21.969 Host Read Commands: 0 00:16:21.969 Host Write Commands: 0 00:16:21.969 Controller Busy Time: 0 minutes 00:16:21.969 Power Cycles: 0 00:16:21.969 Power On Hours: 0 hours 00:16:21.969 Unsafe Shutdowns: 0 00:16:21.969 Unrecoverable Media Errors: 0 00:16:21.969 Lifetime Error Log Entries: 0 00:16:21.969 Warning Temperature Time: 0 minutes 00:16:21.969 Critical Temperature Time: 0 minutes 00:16:21.969 00:16:21.969 Number of Queues 00:16:21.969 
================ 00:16:21.969 Number of I/O Submission Queues: 127 00:16:21.969 Number of I/O Completion Queues: 127 00:16:21.969 00:16:21.969 Active Namespaces 00:16:21.969 ================= 00:16:21.969 Namespace ID:1 00:16:21.969 Error Recovery Timeout: Unlimited 00:16:21.969 Command Set Identifier: NVM (00h) 00:16:21.969 Deallocate: Supported 00:16:21.969 Deallocated/Unwritten Error: Not Supported 00:16:21.969 Deallocated Read Value: Unknown 00:16:21.969 Deallocate in Write Zeroes: Not Supported 00:16:21.969 Deallocated Guard Field: 0xFFFF 00:16:21.970 Flush: Supported 00:16:21.970 Reservation: Supported 00:16:21.970 Namespace Sharing Capabilities: Multiple Controllers 00:16:21.970 Size (in LBAs): 131072 (0GiB) 00:16:21.970 Capacity (in LBAs): 131072 (0GiB) 00:16:21.970 Utilization (in LBAs): 131072 (0GiB) 00:16:21.970 NGUID: ABCDEF0123456789ABCDEF0123456789 00:16:21.970 EUI64: ABCDEF0123456789 00:16:21.970 UUID: 490826f6-6575-4053-834d-b25d1179759a 00:16:21.970 Thin Provisioning: Not Supported 00:16:21.970 Per-NS Atomic Units: Yes 00:16:21.970 Atomic Boundary Size (Normal): 0 00:16:21.970 Atomic Boundary Size (PFail): 0 00:16:21.970 Atomic Boundary Offset: 0 00:16:21.970 Maximum Single Source Range Length: 65535 00:16:21.970 Maximum Copy Length: 65535 00:16:21.970 Maximum Source Range Count: 1 00:16:21.970 NGUID/EUI64 Never Reused: No 00:16:21.970 Namespace Write Protected: No 00:16:21.970 Number of LBA Formats: 1 00:16:21.970 Current LBA Format: LBA Format #00 00:16:21.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:21.970 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.970 rmmod nvme_tcp 00:16:21.970 rmmod nvme_fabrics 00:16:21.970 rmmod nvme_keyring 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86660 ']' 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86660 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 86660 ']' 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 86660 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@953 -- # uname 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86660 00:16:21.970 killing process with pid 86660 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86660' 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 86660 00:16:21.970 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 86660 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.227 00:16:22.227 real 0m2.592s 00:16:22.227 user 0m7.573s 00:16:22.227 sys 0m0.585s 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:22.227 ************************************ 00:16:22.227 END TEST nvmf_identify 00:16:22.227 20:32:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:16:22.227 ************************************ 00:16:22.227 20:32:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:22.227 20:32:43 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:22.227 20:32:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:22.227 20:32:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:22.227 20:32:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.227 ************************************ 00:16:22.227 START TEST nvmf_perf 00:16:22.227 ************************************ 00:16:22.227 20:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:16:22.486 * Looking for test storage... 
00:16:22.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:22.486 Cannot find device "nvmf_tgt_br" 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.486 Cannot find device "nvmf_tgt_br2" 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:22.486 Cannot find device "nvmf_tgt_br" 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:22.486 Cannot find device "nvmf_tgt_br2" 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:22.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:22.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:22.486 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:22.486 
20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:22.487 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:22.745 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:22.745 20:32:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:22.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:22.745 00:16:22.745 --- 10.0.0.2 ping statistics --- 00:16:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.745 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:22.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:22.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:22.745 00:16:22.745 --- 10.0.0.3 ping statistics --- 00:16:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.745 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:22.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:22.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:22.745 00:16:22.745 --- 10.0.0.1 ping statistics --- 00:16:22.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.745 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86886 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86886 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 86886 ']' 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:22.745 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:22.745 [2024-07-15 20:32:44.181290] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:22.745 [2024-07-15 20:32:44.181390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.004 [2024-07-15 20:32:44.323117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.004 [2024-07-15 20:32:44.391393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.004 [2024-07-15 20:32:44.391449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:23.004 [2024-07-15 20:32:44.391463] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.004 [2024-07-15 20:32:44.391473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.004 [2024-07-15 20:32:44.391481] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.004 [2024-07-15 20:32:44.391599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.004 [2024-07-15 20:32:44.394796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.004 [2024-07-15 20:32:44.394926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.004 [2024-07-15 20:32:44.394953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.004 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:23.004 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:16:23.004 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.004 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:23.004 20:32:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:23.261 20:32:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.261 20:32:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:23.261 20:32:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:16:23.826 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:16:23.826 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:16:23.826 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:16:23.826 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:24.391 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:16:24.391 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:16:24.391 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:16:24.391 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:16:24.391 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:24.648 [2024-07-15 20:32:45.930367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.648 20:32:45 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:24.904 20:32:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:24.904 20:32:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:25.161 20:32:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:16:25.161 20:32:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:16:25.420 20:32:46 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.677 [2024-07-15 20:32:46.999589] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.677 20:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:25.933 20:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:16:25.933 20:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:25.933 20:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:16:25.933 20:32:47 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:16:27.302 Initializing NVMe Controllers 00:16:27.302 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:16:27.303 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:16:27.303 Initialization complete. Launching workers. 00:16:27.303 ======================================================== 00:16:27.303 Latency(us) 00:16:27.303 Device Information : IOPS MiB/s Average min max 00:16:27.303 PCIE (0000:00:10.0) NSID 1 from core 0: 24411.70 95.36 1310.15 261.04 6811.10 00:16:27.303 ======================================================== 00:16:27.303 Total : 24411.70 95.36 1310.15 261.04 6811.10 00:16:27.303 00:16:27.303 20:32:48 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:28.238 Initializing NVMe Controllers 00:16:28.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:28.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:28.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:28.238 Initialization complete. Launching workers. 00:16:28.238 ======================================================== 00:16:28.238 Latency(us) 00:16:28.238 Device Information : IOPS MiB/s Average min max 00:16:28.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3445.09 13.46 289.98 116.04 5207.92 00:16:28.238 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.62 6975.52 12040.43 00:16:28.238 ======================================================== 00:16:28.238 Total : 3568.59 13.94 562.37 116.04 12040.43 00:16:28.238 00:16:28.496 20:32:49 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:29.872 Initializing NVMe Controllers 00:16:29.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:29.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:29.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:29.872 Initialization complete. Launching workers. 
00:16:29.872 ======================================================== 00:16:29.872 Latency(us) 00:16:29.872 Device Information : IOPS MiB/s Average min max 00:16:29.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8524.99 33.30 3770.48 734.18 7751.52 00:16:29.872 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2676.00 10.45 12027.39 7211.20 20525.67 00:16:29.872 ======================================================== 00:16:29.872 Total : 11200.99 43.75 5743.12 734.18 20525.67 00:16:29.872 00:16:29.872 20:32:51 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:16:29.872 20:32:51 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:32.407 Initializing NVMe Controllers 00:16:32.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:32.407 Controller IO queue size 128, less than required. 00:16:32.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:32.407 Controller IO queue size 128, less than required. 00:16:32.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:32.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:32.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:32.407 Initialization complete. Launching workers. 00:16:32.407 ======================================================== 00:16:32.407 Latency(us) 00:16:32.407 Device Information : IOPS MiB/s Average min max 00:16:32.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1482.49 370.62 88540.27 54885.86 143807.83 00:16:32.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.56 139.39 253934.86 149327.74 463726.87 00:16:32.407 ======================================================== 00:16:32.407 Total : 2040.05 510.01 133743.54 54885.86 463726.87 00:16:32.407 00:16:32.407 20:32:53 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:16:32.666 Initializing NVMe Controllers 00:16:32.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:32.666 Controller IO queue size 128, less than required. 00:16:32.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:32.666 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:16:32.666 Controller IO queue size 128, less than required. 00:16:32.666 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:32.666 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:16:32.666 WARNING: Some requested NVMe devices were skipped 00:16:32.666 No valid NVMe controllers or AIO or URING devices found 00:16:32.667 20:32:54 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:16:35.198 Initializing NVMe Controllers 00:16:35.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:35.198 Controller IO queue size 128, less than required. 00:16:35.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:35.198 Controller IO queue size 128, less than required. 00:16:35.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:35.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:35.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:35.198 Initialization complete. Launching workers. 00:16:35.198 00:16:35.198 ==================== 00:16:35.198 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:16:35.198 TCP transport: 00:16:35.198 polls: 9027 00:16:35.198 idle_polls: 5892 00:16:35.198 sock_completions: 3135 00:16:35.198 nvme_completions: 5921 00:16:35.198 submitted_requests: 8938 00:16:35.198 queued_requests: 1 00:16:35.198 00:16:35.198 ==================== 00:16:35.198 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:16:35.198 TCP transport: 00:16:35.198 polls: 9784 00:16:35.198 idle_polls: 6808 00:16:35.198 sock_completions: 2976 00:16:35.198 nvme_completions: 6069 00:16:35.198 submitted_requests: 9152 00:16:35.198 queued_requests: 1 00:16:35.198 ======================================================== 00:16:35.198 Latency(us) 00:16:35.198 Device Information : IOPS MiB/s Average min max 00:16:35.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1479.75 369.94 88484.25 58329.28 140568.54 00:16:35.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1516.75 379.19 85603.03 38498.67 133046.95 00:16:35.198 ======================================================== 00:16:35.198 Total : 2996.50 749.13 87025.86 38498.67 140568.54 00:16:35.198 00:16:35.198 20:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:16:35.198 20:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.764 20:32:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.764 rmmod nvme_tcp 00:16:35.764 rmmod nvme_fabrics 00:16:35.764 rmmod nvme_keyring 00:16:35.764 20:32:57 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86886 ']' 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86886 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 86886 ']' 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 86886 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 86886 00:16:35.764 killing process with pid 86886 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 86886' 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 86886 00:16:35.764 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 86886 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.331 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.332 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.332 20:32:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:36.332 ************************************ 00:16:36.332 END TEST nvmf_perf 00:16:36.332 ************************************ 00:16:36.332 00:16:36.332 real 0m14.071s 00:16:36.332 user 0m52.216s 00:16:36.332 sys 0m3.432s 00:16:36.332 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:36.332 20:32:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:16:36.332 20:32:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:36.332 20:32:57 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:36.332 20:32:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.332 20:32:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.332 20:32:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:36.332 ************************************ 00:16:36.332 START TEST nvmf_fio_host 00:16:36.332 ************************************ 00:16:36.332 20:32:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:36.591 * Looking for test storage... 
00:16:36.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:36.591 20:32:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.591 20:32:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.591 20:32:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:36.592 Cannot find device "nvmf_tgt_br" 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:36.592 Cannot find device "nvmf_tgt_br2" 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:36.592 Cannot find device "nvmf_tgt_br" 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:36.592 Cannot find device "nvmf_tgt_br2" 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:16:36.592 20:32:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:36.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:36.592 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:36.592 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:36.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:16:36.852 00:16:36.852 --- 10.0.0.2 ping statistics --- 00:16:36.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.852 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:36.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:36.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:36.852 00:16:36.852 --- 10.0.0.3 ping statistics --- 00:16:36.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.852 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:36.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:16:36.852 00:16:36.852 --- 10.0.0.1 ping statistics --- 00:16:36.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.852 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87363 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87363 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 87363 ']' 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.852 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.852 [2024-07-15 20:32:58.313098] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:36.852 [2024-07-15 20:32:58.313197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.117 [2024-07-15 20:32:58.451763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:37.117 [2024-07-15 20:32:58.512543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
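With the namespace network verified by the pings above, the target application is launched inside that namespace and then provisioned over JSON-RPC, after which the fio workload is driven through the SPDK NVMe fio plugin. Stripped of the surrounding bookkeeping, the sequence traced just above and below this point amounts to (a sketch; the backgrounding and the wait for the RPC socket are implied by the harness rather than shown verbatim here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # once the app answers on /var/tmp/spdk.sock, configure it:
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # run fio against the exported namespace via the SPDK NVMe plugin:
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The --filename string is how the plugin is told to open an NVMe-oF/TCP controller (transport type, address family, address, service ID and namespace) instead of a block device path.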
00:16:37.117 [2024-07-15 20:32:58.512794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.117 [2024-07-15 20:32:58.513028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.117 [2024-07-15 20:32:58.513237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.117 [2024-07-15 20:32:58.513349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.117 [2024-07-15 20:32:58.513600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.117 [2024-07-15 20:32:58.513684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.117 [2024-07-15 20:32:58.513813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:37.117 [2024-07-15 20:32:58.513818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.117 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:37.117 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:16:37.117 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:37.406 [2024-07-15 20:32:58.869246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.664 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:37.664 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:37.664 20:32:58 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.664 20:32:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:37.923 Malloc1 00:16:37.923 20:32:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:38.182 20:32:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:38.440 20:32:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.698 [2024-07-15 20:33:00.074317] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.698 20:33:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:38.956 20:33:00 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:39.214 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:39.214 fio-3.35 00:16:39.214 Starting 1 thread 00:16:41.746 00:16:41.746 test: (groupid=0, jobs=1): err= 0: pid=87481: Mon Jul 15 20:33:02 2024 00:16:41.746 read: IOPS=8690, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec) 00:16:41.746 slat (usec): min=2, max=309, avg= 2.71, stdev= 2.99 00:16:41.746 clat (usec): min=2869, max=13070, avg=7708.06, stdev=657.63 00:16:41.746 lat (usec): min=2908, max=13073, avg=7710.78, stdev=657.38 00:16:41.746 clat percentiles (usec): 00:16:41.746 | 1.00th=[ 6521], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:16:41.746 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:16:41.746 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8848], 00:16:41.746 | 99.00th=[ 9634], 99.50th=[10028], 99.90th=[10945], 99.95th=[12387], 00:16:41.746 | 99.99th=[13042] 00:16:41.746 bw ( KiB/s): min=33552, max=35496, per=99.99%, avg=34760.00, stdev=865.55, samples=4 00:16:41.746 iops : min= 8388, max= 8874, avg=8690.00, stdev=216.39, samples=4 00:16:41.746 write: IOPS=8685, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec); 0 zone resets 00:16:41.746 slat (usec): min=2, max=206, avg= 2.83, stdev= 2.10 00:16:41.746 clat (usec): min=2047, max=12966, avg=6959.39, stdev=595.62 00:16:41.746 lat (usec): 
min=2059, max=12969, avg=6962.22, stdev=595.45 00:16:41.746 clat percentiles (usec): 00:16:41.746 | 1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 6325], 20.00th=[ 6521], 00:16:41.746 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:16:41.746 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7570], 95.00th=[ 7898], 00:16:41.746 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11863], 99.95th=[12256], 00:16:41.746 | 99.99th=[12911] 00:16:41.746 bw ( KiB/s): min=34456, max=34976, per=99.98%, avg=34732.00, stdev=248.26, samples=4 00:16:41.746 iops : min= 8614, max= 8744, avg=8683.00, stdev=62.06, samples=4 00:16:41.746 lat (msec) : 4=0.17%, 10=99.49%, 20=0.34% 00:16:41.746 cpu : usr=65.30%, sys=24.78%, ctx=35, majf=0, minf=7 00:16:41.746 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:41.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:41.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:41.746 issued rwts: total=17442,17431,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:41.746 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:41.746 00:16:41.746 Run status group 0 (all jobs): 00:16:41.746 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:16:41.746 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:41.746 20:33:02 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:41.746 20:33:02 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:41.746 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:41.746 fio-3.35 00:16:41.746 Starting 1 thread 00:16:44.331 00:16:44.332 test: (groupid=0, jobs=1): err= 0: pid=87524: Mon Jul 15 20:33:05 2024 00:16:44.332 read: IOPS=6416, BW=100MiB/s (105MB/s)(202MiB/2014msec) 00:16:44.332 slat (usec): min=3, max=133, avg= 4.15, stdev= 2.17 00:16:44.332 clat (msec): min=2, max=258, avg=12.16, stdev=20.94 00:16:44.332 lat (msec): min=2, max=258, avg=12.16, stdev=20.94 00:16:44.332 clat percentiles (msec): 00:16:44.332 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:16:44.332 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 12], 00:16:44.332 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 15], 00:16:44.332 | 99.00th=[ 19], 99.50th=[ 257], 99.90th=[ 259], 99.95th=[ 259], 00:16:44.332 | 99.99th=[ 259] 00:16:44.332 bw ( KiB/s): min=31200, max=64352, per=49.68%, avg=51000.00, stdev=14549.44, samples=4 00:16:44.332 iops : min= 1950, max= 4022, avg=3187.50, stdev=909.34, samples=4 00:16:44.332 write: IOPS=3711, BW=58.0MiB/s (60.8MB/s)(104MiB/1791msec); 0 zone resets 00:16:44.332 slat (usec): min=37, max=243, avg=40.24, stdev= 5.68 00:16:44.332 clat (msec): min=7, max=263, avg=14.07, stdev=18.42 00:16:44.332 lat (msec): min=7, max=263, avg=14.11, stdev=18.42 00:16:44.332 clat percentiles (msec): 00:16:44.332 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:16:44.332 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 13], 00:16:44.332 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 17], 95.00th=[ 18], 00:16:44.332 | 99.00th=[ 23], 99.50th=[ 259], 99.90th=[ 264], 99.95th=[ 264], 00:16:44.332 | 99.99th=[ 264] 00:16:44.332 bw ( KiB/s): min=31904, max=67424, per=89.04%, avg=52880.00, stdev=15667.03, samples=4 00:16:44.332 iops : min= 1994, max= 4214, avg=3305.00, stdev=979.19, samples=4 00:16:44.332 lat (msec) : 4=0.25%, 10=33.98%, 20=64.43%, 50=0.69%, 500=0.65% 00:16:44.332 cpu : usr=74.47%, sys=16.99%, ctx=8, majf=0, minf=18 00:16:44.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:44.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:44.332 issued rwts: total=12922,6648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:44.332 00:16:44.332 Run status group 0 (all jobs): 00:16:44.332 READ: bw=100MiB/s (105MB/s), 100MiB/s-100MiB/s (105MB/s-105MB/s), io=202MiB (212MB), run=2014-2014msec 00:16:44.332 WRITE: bw=58.0MiB/s (60.8MB/s), 58.0MiB/s-58.0MiB/s (60.8MB/s-60.8MB/s), io=104MiB (109MB), run=1791-1791msec 00:16:44.332 20:33:05 
nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:44.332 rmmod nvme_tcp 00:16:44.332 rmmod nvme_fabrics 00:16:44.332 rmmod nvme_keyring 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 87363 ']' 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 87363 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 87363 ']' 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 87363 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87363 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:44.332 killing process with pid 87363 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87363' 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 87363 00:16:44.332 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 87363 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:44.590 00:16:44.590 
real 0m8.178s 00:16:44.590 user 0m33.778s 00:16:44.590 sys 0m2.154s 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:44.590 20:33:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.590 ************************************ 00:16:44.590 END TEST nvmf_fio_host 00:16:44.590 ************************************ 00:16:44.590 20:33:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:44.590 20:33:06 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:44.590 20:33:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:44.590 20:33:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:44.590 20:33:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:44.590 ************************************ 00:16:44.590 START TEST nvmf_failover 00:16:44.590 ************************************ 00:16:44.590 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:44.849 * Looking for test storage... 00:16:44.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:44.849 20:33:06 
nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.849 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:44.850 Cannot find device "nvmf_tgt_br" 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.850 Cannot find device "nvmf_tgt_br2" 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip 
link set nvmf_init_br down 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:44.850 Cannot find device "nvmf_tgt_br" 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:44.850 Cannot find device "nvmf_tgt_br2" 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.850 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.850 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:45.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:16:45.109 00:16:45.109 --- 10.0.0.2 ping statistics --- 00:16:45.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.109 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:45.109 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.109 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:16:45.109 00:16:45.109 --- 10.0.0.3 ping statistics --- 00:16:45.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.109 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:45.109 00:16:45.109 --- 10.0.0.1 ping statistics --- 00:16:45.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.109 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87743 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87743 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87743 ']' 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.109 20:33:06 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.109 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:45.109 [2024-07-15 20:33:06.573021] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:16:45.109 [2024-07-15 20:33:06.573114] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.367 [2024-07-15 20:33:06.707563] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:45.367 [2024-07-15 20:33:06.788963] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.367 [2024-07-15 20:33:06.789033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.367 [2024-07-15 20:33:06.789054] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.367 [2024-07-15 20:33:06.789068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.367 [2024-07-15 20:33:06.789079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:45.367 [2024-07-15 20:33:06.789763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.367 [2024-07-15 20:33:06.789830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.367 [2024-07-15 20:33:06.789842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.623 20:33:06 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:45.881 [2024-07-15 20:33:07.181572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.881 20:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:46.138 Malloc0 00:16:46.138 20:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.395 20:33:07 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:46.652 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.908 [2024-07-15 20:33:08.342221] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.908 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:47.185 [2024-07-15 20:33:08.642437] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:47.185 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:47.442 [2024-07-15 20:33:08.906675] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87847 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87847 /var/tmp/bdevperf.sock 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 87847 ']' 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
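For the failover case the same Malloc-backed subsystem is exported on three ports (4420, 4421 and 4422) and bdevperf is started in RPC-server mode (-z -r /var/tmp/bdevperf.sock) so the test can drive it remotely. The trace below then attaches two paths, starts the verify workload and removes the active listener; condensed (a sketch, with the backgrounding of perform_tests implied by the harness):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # give bdevperf two paths to the same subsystem
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the 15 s verify workload defined on the bdevperf command line
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    sleep 1
    # remove the listener the first path is using, forcing I/O onto the surviving path
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The burst of "recv state of tqpair ... is same with the state(5) to be set" messages that follows each remove_listener call below is target-side logging from the qpair teardown on the removed port while I/O continues on the remaining listener.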
00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:47.442 20:33:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:48.005 20:33:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:48.005 20:33:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:16:48.005 20:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:48.261 NVMe0n1 00:16:48.261 20:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:48.518 00:16:48.518 20:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87881 00:16:48.518 20:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:48.518 20:33:09 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:16:49.451 20:33:10 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.017 [2024-07-15 20:33:11.219418] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219487] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219499] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219508] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219517] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219527] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219535] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219552] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219560] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219568] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219576] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same with the state(5) to be set 00:16:50.017 [2024-07-15 20:33:11.219585] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125ef80 is same 
00:16:50.018 20:33:11 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:16:53.302 20:33:14 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:16:53.302 00
00:16:53.302 20:33:14 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:53.560 [2024-07-15 20:33:14.949170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x125fe10 is same with the state(5) to be set
00:16:53.560 20:33:14 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:16:56.910 20:33:17 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:56.910 [2024-07-15 20:33:18.257963] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:56.910 20:33:18 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:16:57.850 20:33:19 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:16:58.108 [2024-07-15 20:33:19.589655] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12609a0 is same with the state(5) to be set
00:16:58.108 [2024-07-15 20:33:19.590152] tcp.c:1621:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x12609a0 is same with the state(5) to be set 00:16:58.108 [2024-07-15 20:33:19.590161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12609a0 is same with the state(5) to be set 00:16:58.366 20:33:19 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 87881 00:17:03.631 0 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 87847 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87847 ']' 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87847 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87847 00:17:03.631 killing process with pid 87847 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87847' 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87847 00:17:03.631 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87847 00:17:03.895 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:03.895 [2024-07-15 20:33:08.973224] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:17:03.895 [2024-07-15 20:33:08.973339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87847 ] 00:17:03.895 [2024-07-15 20:33:09.105671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.895 [2024-07-15 20:33:09.166057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.895 Running I/O for 15 seconds... 
00:17:03.895 [2024-07-15 20:33:11.221610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.221962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.221984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.895 [2024-07-15 20:33:11.222007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.895 [2024-07-15 20:33:11.222055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:75032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:75040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:75064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:75112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:75120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:75128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:75144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:75160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:75168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:75192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.222980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.222995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75200 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:75208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:75232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:75240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:75248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:75256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.896 [2024-07-15 20:33:11.223294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:03.896 [2024-07-15 20:33:11.223324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223697] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.223976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.223990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224048] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.896 [2024-07-15 20:33:11.224415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.896 [2024-07-15 20:33:11.224447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:75520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.897 [2024-07-15 20:33:11.224476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:03.897 [2024-07-15 20:33:11.224505 .. 20:33:11.226356] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 46 repeated command/completion pairs condensed: WRITE sqid:1 nsid:1 lba:75528..75888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (cid values vary), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:03.897 [2024-07-15 20:33:11.226392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:17:03.897 [2024-07-15 20:33:11.226409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:17:03.897 [2024-07-15 20:33:11.226420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75896 len:8 PRP1 0x0 PRP2 0x0 
00:17:03.897 [2024-07-15 20:33:11.226434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:03.897 [2024-07-15 20:33:11.226487] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170ac90 was disconnected and freed. reset controller. 
00:17:03.897 [2024-07-15 20:33:11.226505] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:17:03.897 [2024-07-15 20:33:11.226565 .. 20:33:11.226676] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: 4 repeated admin command/completion pairs condensed: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:03.897 [2024-07-15 20:33:11.226690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:03.897 [2024-07-15 20:33:11.230764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:17:03.897 [2024-07-15 20:33:11.230818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168ee30 (9): Bad file descriptor 
00:17:03.897 [2024-07-15 20:33:11.262261] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
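Note on the failover above: the "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" message comes from bdev_nvme_failover_trid, which only fires when the same controller has alternate transport IDs registered for nqn.2016-06.io.spdk:cnode1. A minimal, hedged sketch of how such alternate TCP paths are usually registered from the initiator side with scripts/rpc.py follows; the bdev name Nvme0, the ./scripts/rpc.py path, and the -x failover multipath flag are illustrative assumptions and not taken from this log.

# Hedged sketch only -- not the exact commands this CI job ran. Assumes an SPDK
# target already listens for nqn.2016-06.io.spdk:cnode1 on 10.0.0.2 ports 4420-4422
# and that rpc.py talks to the initiator-side SPDK application.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Attaching the same controller name against additional portals registers the
# alternate trids that bdev_nvme fails over to. The -x/--multipath flag and its
# accepted values differ between SPDK releases; check rpc.py bdev_nvme_attach_controller --help.
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 -x failover
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1 -x failover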
00:17:03.897 [2024-07-15 20:33:14.950510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950924] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.950982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.950996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.951011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.951025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.951041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.951063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.897 [2024-07-15 20:33:14.951080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.897 [2024-07-15 20:33:14.951094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.898 [2024-07-15 20:33:14.951359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76536 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.951982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.951996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 
20:33:14.952173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.952981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.898 [2024-07-15 20:33:14.952998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.898 [2024-07-15 20:33:14.953012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953430] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953724] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.899 [2024-07-15 20:33:14.953833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.953917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77056 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.953931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.953952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.953963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.953973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77064 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.953987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77072 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 
20:33:14.954099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77088 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77096 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77104 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77112 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77120 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954361] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77128 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954399] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77136 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77144 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77152 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77160 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77168 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954693] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:17:03.899 [2024-07-15 20:33:14.954703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76360 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76368 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76376 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76384 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76392 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.954957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.954967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76400 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.954980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.954994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:03.899 [2024-07-15 20:33:14.955010] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:03.899 [2024-07-15 20:33:14.955022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:17:03.899 [2024-07-15 20:33:14.955035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.955095] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x170cb80 was disconnected and freed. reset controller. 00:17:03.899 [2024-07-15 20:33:14.955114] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:17:03.899 [2024-07-15 20:33:14.955194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.899 [2024-07-15 20:33:14.955217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.955233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.899 [2024-07-15 20:33:14.955246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.899 [2024-07-15 20:33:14.955263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.900 [2024-07-15 20:33:14.955277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:14.955291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.900 [2024-07-15 20:33:14.955304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:14.955318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:03.900 [2024-07-15 20:33:14.959367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:03.900 [2024-07-15 20:33:14.959430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168ee30 (9): Bad file descriptor 00:17:03.900 [2024-07-15 20:33:14.996646] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
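Each failover cycle recorded above has the same shape: the active qpair is torn down, every queued command is completed manually as ABORTED - SQ DELETION, bdev_nvme moves to the next registered path (here 4421 -> 4422), and the controller is reset. On the test side a cycle like this is provoked by dropping the path the initiator is currently using and waiting, the same detach-and-wait pattern the script shows later in this log (failover.sh@84-87 and @98-101). A minimal sketch of that pattern, with the port inferred from the "Start failover from ...:4421" line above and the RPC socket taken from the later commands:

  # drop the path bdevperf is currently using; bdev_nvme should fail over to the
  # next listener registered for nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1
  # give the reset/reconnect a few seconds before checking bdev_nvme_get_controllers
  sleep 3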
00:17:03.900 [2024-07-15 20:33:19.589135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.900 [2024-07-15 20:33:19.589203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.589224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.900 [2024-07-15 20:33:19.589238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.589252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.900 [2024-07-15 20:33:19.589266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.589280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.900 [2024-07-15 20:33:19.589292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.589305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168ee30 is same with the state(5) to be set 00:17:03.900 [2024-07-15 20:33:19.590397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.590979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.590993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.591023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.591052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.591083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.591112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.900 [2024-07-15 20:33:19.591141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:17:03.900 [2024-07-15 20:33:19.591255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 
nsid:1 lba:3840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:3856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.591963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.591986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.592000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.592022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.592037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.592052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.592066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.592082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.592095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.900 [2024-07-15 20:33:19.592111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.900 [2024-07-15 20:33:19.592124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:03.901 [2024-07-15 20:33:19.592202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.592974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.592989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.593003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.593032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.593061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.593090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.593118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:03.901 [2024-07-15 20:33:19.593154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:03.901 [2024-07-15 20:33:19.593169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593471] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593778] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.593973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.593986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3568 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.901 [2024-07-15 20:33:19.594305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.901 [2024-07-15 20:33:19.594319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.902 [2024-07-15 20:33:19.594334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.902 [2024-07-15 20:33:19.594348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.902 [2024-07-15 20:33:19.594363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.902 [2024-07-15 20:33:19.594379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.902 [2024-07-15 20:33:19.594400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ba60 is same with the state(5) to be set 00:17:03.902 
[2024-07-15 20:33:19.594417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:03.902 [2024-07-15 20:33:19.594428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:03.902 [2024-07-15 20:33:19.594439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3648 len:8 PRP1 0x0 PRP2 0x0
00:17:03.902 [2024-07-15 20:33:19.594452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.902 [2024-07-15 20:33:19.594499] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x171ba60 was disconnected and freed. reset controller.
00:17:03.902 [2024-07-15 20:33:19.594517] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:17:03.902 [2024-07-15 20:33:19.594532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:03.902 [2024-07-15 20:33:19.598733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:03.902 [2024-07-15 20:33:19.598811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168ee30 (9): Bad file descriptor
00:17:03.902 [2024-07-15 20:33:19.639677] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:17:03.902
00:17:03.902 Latency(us)
00:17:03.902 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:17:03.902 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:03.902 Verification LBA range: start 0x0 length 0x4000
00:17:03.902 NVMe0n1            :      15.01    8354.42      32.63     211.42       0.00   14908.33     621.85   51713.86
00:17:03.902 ===================================================================================================================
00:17:03.902 Total              :            8354.42      32.63     211.42       0.00   14908.33     621.85   51713.86
00:17:03.902 Received shutdown signal, test time was about 15.000000 seconds
00:17:03.902
00:17:03.902 Latency(us)
00:17:03.902 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:17:03.902 ===================================================================================================================
00:17:03.902 Total              :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:17:03.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
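The grep above is the pass/fail gate for the first bdevperf run: that run provokes three failovers (4420 -> 4421 -> 4422 -> 4420, matching the failover_trid notices in the trace), so its output should contain exactly three "Resetting controller successful" lines. A minimal sketch of that check, assuming the trace is the try.txt file that failover.sh@94/@115 reference further down:

  # count completed controller resets in the captured bdevperf trace
  count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  # three failovers were provoked, so any other count fails this stage
  if (( count != 3 )); then
      echo "expected 3 successful resets, got $count" >&2
      exit 1
  fi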
00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88080 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88080 /var/tmp/bdevperf.sock 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 88080 ']' 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.902 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:04.159 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:04.159 20:33:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:17:04.159 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:04.416 [2024-07-15 20:33:25.842657] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:04.416 20:33:25 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:04.674 [2024-07-15 20:33:26.090887] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:04.674 20:33:26 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:04.932 NVMe0n1 00:17:05.190 20:33:26 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:05.448 00:17:05.448 20:33:26 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:05.706 00:17:05.706 20:33:27 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:05.706 20:33:27 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:17:05.964 20:33:27 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:06.223 20:33:27 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:17:09.521 20:33:30 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # 
grep -q NVMe0 00:17:09.521 20:33:30 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:09.521 20:33:31 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88203 00:17:09.521 20:33:31 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:09.521 20:33:31 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 88203 00:17:10.989 0 00:17:10.989 20:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:10.989 [2024-07-15 20:33:25.280580] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:17:10.989 [2024-07-15 20:33:25.280690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88080 ] 00:17:10.989 [2024-07-15 20:33:25.436733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.989 [2024-07-15 20:33:25.508183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.989 [2024-07-15 20:33:27.608156] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:10.989 [2024-07-15 20:33:27.609016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.989 [2024-07-15 20:33:27.610144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.989 [2024-07-15 20:33:27.610290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.989 [2024-07-15 20:33:27.610417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.989 [2024-07-15 20:33:27.610529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.989 [2024-07-15 20:33:27.610646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.989 [2024-07-15 20:33:27.610759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.989 [2024-07-15 20:33:27.610889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.989 [2024-07-15 20:33:27.611047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:10.989 [2024-07-15 20:33:27.611233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x109ae30 (9): Bad file descriptor 00:17:10.989 [2024-07-15 20:33:27.611376] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:10.989 [2024-07-15 20:33:27.620993] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:10.989 Running I/O for 1 seconds... 
00:17:10.989 00:17:10.989 Latency(us) 00:17:10.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.989 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:10.989 Verification LBA range: start 0x0 length 0x4000 00:17:10.989 NVMe0n1 : 1.02 8910.02 34.80 0.00 0.00 14295.41 2502.28 16086.11 00:17:10.989 =================================================================================================================== 00:17:10.989 Total : 8910.02 34.80 0.00 0.00 14295.41 2502.28 16086.11 00:17:10.989 20:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:10.989 20:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:17:10.990 20:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.556 20:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:11.556 20:33:32 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:17:11.556 20:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:12.121 20:33:33 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 88080 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 88080 ']' 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 88080 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88080 00:17:15.401 killing process with pid 88080 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88080' 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 88080 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 88080 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:17:15.401 20:33:36 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:15.967 20:33:37 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.967 rmmod nvme_tcp 00:17:15.967 rmmod nvme_fabrics 00:17:15.967 rmmod nvme_keyring 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87743 ']' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87743 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 87743 ']' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 87743 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 87743 00:17:15.967 killing process with pid 87743 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 87743' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 87743 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 87743 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.967 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.226 20:33:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:16.226 ************************************ 00:17:16.226 END TEST nvmf_failover 00:17:16.226 ************************************ 00:17:16.226 00:17:16.226 real 0m31.457s 00:17:16.226 user 2m3.200s 00:17:16.226 sys 0m4.519s 00:17:16.226 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.226 20:33:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:16.226 20:33:37 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:17:16.226 20:33:37 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:16.226 20:33:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.226 20:33:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.227 20:33:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.227 ************************************ 00:17:16.227 START TEST nvmf_host_discovery 00:17:16.227 ************************************ 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:16.227 * Looking for test storage... 00:17:16.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:16.227 Cannot find device "nvmf_tgt_br" 00:17:16.227 
20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:16.227 Cannot find device "nvmf_tgt_br2" 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:16.227 Cannot find device "nvmf_tgt_br" 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:16.227 Cannot find device "nvmf_tgt_br2" 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:17:16.227 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:16.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:16.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:16.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:17:16.486 00:17:16.486 --- 10.0.0.2 ping statistics --- 00:17:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.486 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:16.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:16.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:17:16.486 00:17:16.486 --- 10.0.0.3 ping statistics --- 00:17:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.486 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:16.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:17:16.486 00:17:16.486 --- 10.0.0.1 ping statistics --- 00:17:16.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.486 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:16.486 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88510 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88510 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88510 ']' 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.487 20:33:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.746 [2024-07-15 20:33:38.024301] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:17:16.746 [2024-07-15 20:33:38.024403] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.746 [2024-07-15 20:33:38.162599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.746 [2024-07-15 20:33:38.234307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:16.746 [2024-07-15 20:33:38.234600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.746 [2024-07-15 20:33:38.234833] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.746 [2024-07-15 20:33:38.234997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.746 [2024-07-15 20:33:38.235175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.746 [2024-07-15 20:33:38.235273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.004 [2024-07-15 20:33:38.367434] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.004 [2024-07-15 20:33:38.379606] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.004 null0 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.004 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.005 null1 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.005 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88547 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88547 /tmp/host.sock 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 88547 ']' 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.005 20:33:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.005 [2024-07-15 20:33:38.469313] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:17:17.005 [2024-07-15 20:33:38.469419] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88547 ] 00:17:17.263 [2024-07-15 20:33:38.608772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.263 [2024-07-15 20:33:38.681332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:17:18.199 20:33:39 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.199 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 [2024-07-15 20:33:39.919906] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.457 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:18.716 20:33:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.716 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:17:18.716 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:17:18.716 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:17:18.717 20:33:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:19.284 [2024-07-15 20:33:40.563801] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:19.284 [2024-07-15 20:33:40.563850] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:19.284 [2024-07-15 20:33:40.563881] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:19.284 [2024-07-15 20:33:40.651968] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:19.284 [2024-07-15 20:33:40.716059] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:17:19.284 [2024-07-15 20:33:40.716109] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:17:19.852 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:19.852 20:33:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:19.853 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:20.111 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.112 [2024-07-15 20:33:41.528589] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:20.112 [2024-07-15 20:33:41.529487] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:20.112 [2024-07-15 20:33:41.529532] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.112 20:33:41 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:20.112 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:20.370 [2024-07-15 20:33:41.615569] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.370 [2024-07-15 20:33:41.676963] 
bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:20.370 [2024-07-15 20:33:41.677014] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:20.370 [2024-07-15 20:33:41.677023] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:17:20.370 20:33:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:21.305 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.564 [2024-07-15 20:33:42.817753] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:21.564 [2024-07-15 20:33:42.817796] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:17:21.564 [2024-07-15 20:33:42.823777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.564 [2024-07-15 20:33:42.823821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.564 [2024-07-15 20:33:42.823836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.564 [2024-07-15 20:33:42.823846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.564 [2024-07-15 20:33:42.823856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.564 [2024-07-15 20:33:42.823865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.564 [2024-07-15 20:33:42.823890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:21.564 [2024-07-15 20:33:42.823900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:21.564 [2024-07-15 20:33:42.823910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
get_subsystem_names 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.564 [2024-07-15 20:33:42.833734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 [2024-07-15 20:33:42.843764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:21.564 [2024-07-15 20:33:42.843977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.564 [2024-07-15 20:33:42.844007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed7c50 with addr=10.0.0.2, port=4420 00:17:21.564 [2024-07-15 20:33:42.844021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 [2024-07-15 20:33:42.844044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.564 [2024-07-15 20:33:42.844062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:21.564 [2024-07-15 20:33:42.844071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:21.564 [2024-07-15 20:33:42.844082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:21.564 [2024-07-15 20:33:42.844100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:21.564 [2024-07-15 20:33:42.853880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:21.564 [2024-07-15 20:33:42.854068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.564 [2024-07-15 20:33:42.854095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed7c50 with addr=10.0.0.2, port=4420 00:17:21.564 [2024-07-15 20:33:42.854108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 [2024-07-15 20:33:42.854128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 [2024-07-15 20:33:42.854144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:21.564 [2024-07-15 20:33:42.854154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:21.564 [2024-07-15 20:33:42.854165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:21.564 [2024-07-15 20:33:42.854182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:21.564 [2024-07-15 20:33:42.863971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:21.564 [2024-07-15 20:33:42.864083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.564 [2024-07-15 20:33:42.864108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed7c50 with addr=10.0.0.2, port=4420 00:17:21.564 [2024-07-15 20:33:42.864121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 [2024-07-15 20:33:42.864140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 [2024-07-15 20:33:42.864155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:21.564 [2024-07-15 20:33:42.864165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:21.564 [2024-07-15 20:33:42.864175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:21.564 [2024-07-15 20:33:42.864191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:21.564 [2024-07-15 20:33:42.874038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:21.564 [2024-07-15 20:33:42.874128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.564 [2024-07-15 20:33:42.874150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed7c50 with addr=10.0.0.2, port=4420 00:17:21.564 [2024-07-15 20:33:42.874161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 [2024-07-15 20:33:42.874178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 [2024-07-15 20:33:42.874193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:21.564 [2024-07-15 20:33:42.874202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:21.564 [2024-07-15 20:33:42.874211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:21.564 [2024-07-15 20:33:42.874226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:21.564 [2024-07-15 20:33:42.884098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:21.564 [2024-07-15 20:33:42.884210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.564 [2024-07-15 20:33:42.884235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed7c50 with addr=10.0.0.2, port=4420 00:17:21.564 [2024-07-15 20:33:42.884247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 [2024-07-15 20:33:42.884265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 [2024-07-15 20:33:42.884280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:21.564 [2024-07-15 20:33:42.884290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:21.564 [2024-07-15 20:33:42.884300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:21.564 [2024-07-15 20:33:42.884315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:21.564 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:21.564 [2024-07-15 20:33:42.894167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:21.564 [2024-07-15 20:33:42.894265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:21.564 [2024-07-15 20:33:42.894288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed7c50 with addr=10.0.0.2, port=4420 00:17:21.564 [2024-07-15 20:33:42.894300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed7c50 is same with the state(5) to be set 00:17:21.564 [2024-07-15 20:33:42.894317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed7c50 (9): Bad file descriptor 00:17:21.564 [2024-07-15 20:33:42.894332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:21.564 [2024-07-15 20:33:42.894349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:21.564 [2024-07-15 20:33:42.894359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:21.565 [2024-07-15 20:33:42.894374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:21.565 [2024-07-15 20:33:42.903808] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:21.565 [2024-07-15 20:33:42.903844] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.565 20:33:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.565 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:17:21.824 
20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.824 20:33:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:22.826 [2024-07-15 20:33:44.244234] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:22.827 [2024-07-15 20:33:44.244279] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:22.827 [2024-07-15 20:33:44.244300] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:23.101 [2024-07-15 20:33:44.330367] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:23.101 [2024-07-15 20:33:44.391309] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:23.101 [2024-07-15 20:33:44.391384] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.101 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 2024/07/15 20:33:44 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:23.102 request: 00:17:23.102 { 00:17:23.102 "method": "bdev_nvme_start_discovery", 00:17:23.102 "params": { 00:17:23.102 "name": "nvme", 00:17:23.102 "trtype": "tcp", 00:17:23.102 "traddr": "10.0.0.2", 00:17:23.102 "adrfam": "ipv4", 00:17:23.102 "trsvcid": "8009", 00:17:23.102 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:23.102 "wait_for_attach": true 00:17:23.102 } 00:17:23.102 } 00:17:23.102 Got JSON-RPC error response 00:17:23.102 GoRPCClient: error on JSON-RPC call 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:23.102 20:33:44 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 2024/07/15 20:33:44 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:17:23.102 request: 00:17:23.102 { 00:17:23.102 "method": "bdev_nvme_start_discovery", 00:17:23.102 "params": { 00:17:23.102 "name": "nvme_second", 00:17:23.102 "trtype": "tcp", 00:17:23.102 "traddr": "10.0.0.2", 00:17:23.102 "adrfam": "ipv4", 00:17:23.102 "trsvcid": "8009", 00:17:23.102 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:23.102 "wait_for_attach": true 00:17:23.102 } 00:17:23.102 } 00:17:23.102 Got JSON-RPC error response 00:17:23.102 GoRPCClient: error on JSON-RPC call 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:17:23.102 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.360 20:33:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:24.294 [2024-07-15 20:33:45.663766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:24.294 [2024-07-15 20:33:45.663844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed3f00 with addr=10.0.0.2, port=8010 00:17:24.294 [2024-07-15 20:33:45.663881] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:24.294 [2024-07-15 20:33:45.663895] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:24.294 [2024-07-15 20:33:45.663905] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:25.229 [2024-07-15 20:33:46.663755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:25.229 [2024-07-15 20:33:46.663839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed3f00 with addr=10.0.0.2, port=8010 00:17:25.229 [2024-07-15 20:33:46.663863] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:25.229 [2024-07-15 20:33:46.663886] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:25.229 [2024-07-15 20:33:46.663897] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:26.601 [2024-07-15 20:33:47.663599] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:26.601 2024/07/15 20:33:47 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 
trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:17:26.601 request: 00:17:26.601 { 00:17:26.601 "method": "bdev_nvme_start_discovery", 00:17:26.601 "params": { 00:17:26.601 "name": "nvme_second", 00:17:26.601 "trtype": "tcp", 00:17:26.601 "traddr": "10.0.0.2", 00:17:26.601 "adrfam": "ipv4", 00:17:26.601 "trsvcid": "8010", 00:17:26.601 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:26.601 "wait_for_attach": false, 00:17:26.601 "attach_timeout_ms": 3000 00:17:26.601 } 00:17:26.601 } 00:17:26.601 Got JSON-RPC error response 00:17:26.601 GoRPCClient: error on JSON-RPC call 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88547 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:17:26.601 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:26.602 rmmod nvme_tcp 00:17:26.602 rmmod nvme_fabrics 00:17:26.602 rmmod nvme_keyring 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88510 ']' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88510 00:17:26.602 20:33:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 88510 ']' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 88510 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 88510 00:17:26.602 killing process with pid 88510 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 88510' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 88510 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 88510 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.602 20:33:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.602 20:33:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:26.602 00:17:26.602 real 0m10.501s 00:17:26.602 user 0m21.437s 00:17:26.602 sys 0m1.506s 00:17:26.602 20:33:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.602 20:33:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:17:26.602 ************************************ 00:17:26.602 END TEST nvmf_host_discovery 00:17:26.602 ************************************ 00:17:26.602 20:33:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:26.602 20:33:48 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:26.602 20:33:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:26.602 20:33:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.602 20:33:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.602 ************************************ 00:17:26.602 START TEST nvmf_host_multipath_status 00:17:26.602 ************************************ 00:17:26.602 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:17:26.859 * Looking for test storage... 
00:17:26.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.859 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:26.860 Cannot find device "nvmf_tgt_br" 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:17:26.860 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:17:26.861 Cannot find device "nvmf_tgt_br2" 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:26.861 Cannot find device "nvmf_tgt_br" 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:26.861 Cannot find device "nvmf_tgt_br2" 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:26.861 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:27.118 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:27.118 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:27.119 20:33:48 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:27.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:17:27.119 00:17:27.119 --- 10.0.0.2 ping statistics --- 00:17:27.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.119 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:27.119 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:27.119 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:27.119 00:17:27.119 --- 10.0.0.3 ping statistics --- 00:17:27.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.119 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:27.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:27.119 00:17:27.119 --- 10.0.0.1 ping statistics --- 00:17:27.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.119 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=89037 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 89037 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89037 ']' 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.119 20:33:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:27.119 [2024-07-15 20:33:48.614730] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
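The veth/namespace plumbing exercised just above reduces to a small topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the default namespace, the target namespace nvmf_tgt_ns_spdk holds nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24), and everything is joined through the nvmf_br bridge before the three ping checks. A condensed replay of those same ip/iptables commands (run as root; the error-tolerant teardown steps that print "Cannot find device" are omitted):

# Condensed replay of the nvmf_veth_init setup shown in the log above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Connectivity checks mirroring the log: host-to-target both ways.
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1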
00:17:27.119 [2024-07-15 20:33:48.614827] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.375 [2024-07-15 20:33:48.752967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:27.375 [2024-07-15 20:33:48.824452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.375 [2024-07-15 20:33:48.824523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.375 [2024-07-15 20:33:48.824538] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.375 [2024-07-15 20:33:48.824548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.375 [2024-07-15 20:33:48.824557] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:27.375 [2024-07-15 20:33:48.824715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.375 [2024-07-15 20:33:48.824730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89037 00:17:28.306 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:28.562 [2024-07-15 20:33:49.936840] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.562 20:33:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:28.818 Malloc0 00:17:28.818 20:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:29.075 20:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:29.646 20:33:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.646 [2024-07-15 20:33:51.104203] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.646 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 00:17:29.915 [2024-07-15 20:33:51.348341] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:29.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89135 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89135 /var/tmp/bdevperf.sock 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 89135 ']' 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:29.915 20:33:51 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 20:33:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.287 20:33:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:17:31.287 20:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:31.287 20:33:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:31.854 Nvme0n1 00:17:31.854 20:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:32.115 Nvme0n1 00:17:32.115 20:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:17:32.115 20:33:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:34.016 20:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:17:34.016 20:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:34.581 20:33:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 
-n optimized 00:17:34.581 20:33:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:35.955 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:36.213 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:36.213 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:36.213 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.213 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:36.471 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.471 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:36.730 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:36.730 20:33:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.988 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:36.988 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:36.988 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:36.988 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:37.247 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.247 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:37.247 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:37.247 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:37.505 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:37.505 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:17:37.505 20:33:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:38.070 20:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:38.328 20:33:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:17:39.277 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:17:39.277 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:39.277 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:39.277 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.534 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:39.534 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:39.534 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:39.534 20:34:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:39.791 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:39.791 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:39.791 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:39.791 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.049 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.049 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:40.049 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.049 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:40.616 20:34:01 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.616 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:40.616 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.616 20:34:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:40.874 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:40.874 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:40.874 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:40.874 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:41.134 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:41.134 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:17:41.134 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:41.392 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:17:41.651 20:34:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:17:42.584 20:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:17:42.584 20:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:42.584 20:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:42.584 20:34:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.842 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:42.842 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:42.842 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:42.842 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:43.100 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:43.100 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:43.100 20:34:04 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.100 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:43.358 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.358 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:43.358 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.358 20:34:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:43.925 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:44.183 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:44.183 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:17:44.183 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:44.441 20:34:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:44.699 20:34:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.071 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:46.329 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:46.329 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:46.329 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:46.329 20:34:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.586 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:46.586 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:46.586 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:46.586 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:47.200 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:47.473 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:47.473 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:17:47.473 20:34:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:48.039 20:34:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:48.039 20:34:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.415 20:34:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:49.982 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:49.982 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:49.982 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:49.982 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:50.241 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.241 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:50.241 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.241 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:50.498 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:50.498 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:50.498 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.498 20:34:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:50.756 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:50.756 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:17:50.756 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:50.756 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:51.014 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:51.014 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:17:51.014 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:51.580 20:34:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:51.580 20:34:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:52.953 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.211 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.211 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:53.211 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.211 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:53.469 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:53.469 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:53.469 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:53.469 20:34:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:54.036 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:54.601 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:54.601 20:34:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:17:54.857 20:34:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:17:54.857 20:34:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:17:55.114 20:34:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:55.371 20:34:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:17:56.305 20:34:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:17:56.305 20:34:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:17:56.305 20:34:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.305 20:34:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:56.565 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:56.565 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:56.565 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:56.565 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:17:57.132 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.132 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:17:57.132 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.132 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:17:57.391 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.391 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:17:57.391 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:17:57.391 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.650 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.650 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:17:57.650 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.650 20:34:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:17:57.908 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:57.908 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:17:57.908 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:57.908 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:17:58.167 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:17:58.167 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:17:58.167 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:58.432 20:34:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:58.696 20:34:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:17:59.631 
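Each check_status round above follows the same pattern: both listener ports (4420 and 4421, attached to the same Nvme0 controller via bdev_nvme_attach_controller, the second path with -x multipath) have their ANA state changed through nvmf_subsystem_listener_set_ana_state, the script sleeps one second, and port_status then asks bdevperf for the current/connected/accessible flags of each path. A hedged reconstruction of that probe, with the RPC socket path and jq filter copied from the log (the function body shown here is a sketch, not the script's verbatim source):

# Illustrative sketch of the per-port status probe driven repeatedly above.
# Usage: port_status 4420 current true ; port_status 4421 accessible false
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                 bdev_nvme_get_io_paths \
             | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]   # a non-zero exit fails the surrounding check_status
}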
20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:17:59.631 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:17:59.632 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.632 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:17:59.890 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:17:59.890 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:17:59.890 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:17:59.890 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:00.149 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.149 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:00.149 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.149 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:00.715 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.715 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:00.715 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:00.715 20:34:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:00.974 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:00.974 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:00.974 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:00.974 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.232 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.232 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:01.232 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:01.232 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:01.490 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:01.490 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:01.490 20:34:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:01.748 20:34:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:02.005 20:34:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:02.937 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:02.937 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:02.937 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:02.937 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:03.502 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.502 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:03.502 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.502 20:34:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:03.759 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:03.759 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:03.759 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:03.759 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:04.016 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.016 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:04.016 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:04.016 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.274 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.274 20:34:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:04.274 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.274 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:04.532 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.532 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:04.532 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:04.532 20:34:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:04.790 20:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:04.790 20:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:04.790 20:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:05.048 20:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:05.306 20:34:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:06.246 20:34:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:06.246 20:34:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:06.246 20:34:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.246 20:34:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:06.822 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:06.822 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:06.822 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:06.822 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:07.079 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:07.079 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:07.079 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.079 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:07.337 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.337 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:07.337 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.337 20:34:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:07.594 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:07.594 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:07.594 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:07.594 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:08.161 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:08.161 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:08.161 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:08.161 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89135 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89135 ']' 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89135 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89135 00:18:08.432 killing process with pid 89135 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 89135' 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89135 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89135 00:18:08.432 Connection closed with partial response: 00:18:08.432 
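The repeated rpc.py/jq pairs above are the test's per-port assertions (multipath_status.sh's port_status at line @64), and the nvmf_subsystem_listener_set_ana_state calls (@59/@60) are the ANA transitions being asserted. A minimal sketch of both helpers, reconstructed only from the commands visible in this log rather than from the actual test/nvmf/host/multipath_status.sh source, so argument handling is an assumption:

    # Sketch only; mirrors the rpc.py/jq invocations traced above.
    # Query bdevperf's io_paths over its RPC socket and compare one field
    # (current/connected/accessible) for the listener on the given port.
    port_status() {
        local port=$1 attr=$2 expected=$3
        local actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
                     bdev_nvme_get_io_paths |
                 jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # First argument applies to the 4420 listener, second to 4421,
    # matching the order seen in the log (e.g. "set_ANA_state non_optimized optimized").
    set_ANA_state() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # Example of the pattern exercised above: after flipping 4421 to optimized,
    # the test waits a second and then expects, e.g.:
    #   port_status 4421 current true
    #   port_status 4420 current false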
00:18:08.432 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89135 00:18:08.432 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:08.432 [2024-07-15 20:33:51.418925] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:18:08.432 [2024-07-15 20:33:51.419040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89135 ] 00:18:08.432 [2024-07-15 20:33:51.552022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.432 [2024-07-15 20:33:51.610534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.432 Running I/O for 90 seconds... 00:18:08.432 [2024-07-15 20:34:09.211117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.211907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 
00:18:08.432 [2024-07-15 20:34:09.211953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.211983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.212009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.212026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.212047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.432 [2024-07-15 20:34:09.212063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:08.432 [2024-07-15 20:34:09.212084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.212099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.212719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.212757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.212807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.212830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.212846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.213910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:08.433 [2024-07-15 20:34:09.213963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.213989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.433 [2024-07-15 20:34:09.214399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.214441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.214483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.214532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.214577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.433 [2024-07-15 20:34:09.214620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.433 [2024-07-15 20:34:09.214646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.214972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.214998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0072 p:0 m:0 
dnr:0 00:18:08.434 [2024-07-15 20:34:09.215275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.215743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.215759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:08.434 [2024-07-15 20:34:09.216919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:08.434 [2024-07-15 20:34:09.216949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.434 [2024-07-15 20:34:09.216965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:09.216995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:09.217011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:09.217041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:09.217066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:09.217097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:09.217114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:09.217146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:09.217163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:09.217193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:09.217208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:09.217239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:09.217255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.691840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.691929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.691966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.691984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.692673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:18:08.435 [2024-07-15 20:34:26.692795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.692922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.692937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.694843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.694885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.694916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.694934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.694955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.694972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.694993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.435 [2024-07-15 20:34:26.695293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.695330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.435 [2024-07-15 20:34:26.695351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.435 [2024-07-15 20:34:26.695367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.695790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.436 [2024-07-15 20:34:26.695863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.695973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.695989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.436 [2024-07-15 20:34:26.696688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.436 [2024-07-15 20:34:26.696713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.436 [2024-07-15 20:34:26.696728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.696750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.696765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.696786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.696802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.698807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.698886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.698906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.698928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.698944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.698966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.698982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:18:08.437 [2024-07-15 20:34:26.699056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.699184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.699417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:106 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.699611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.699648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.699684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.699706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.699722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.700632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.700689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.700732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.700770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.700807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.700843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.700906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.700956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.700978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.700993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.701030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.701067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.437 [2024-07-15 20:34:26.701104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.701141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.437 [2024-07-15 20:34:26.701178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.701215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.701252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.701288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.701325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.701361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.437 [2024-07-15 20:34:26.701392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.437 [2024-07-15 20:34:26.701408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.701445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.701482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.701518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.701555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.701591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.701632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.701669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.701705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.701742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.701764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.701779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:18:08.438 [2024-07-15 20:34:26.703500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.703641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.703698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.703714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.704162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.704206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.704552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.704663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.704715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.438 [2024-07-15 20:34:26.704789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:08.438 [2024-07-15 20:34:26.704810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.438 [2024-07-15 20:34:26.704826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.439 [2024-07-15 20:34:26.704847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.439 [2024-07-15 20:34:26.704862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:08.439 [2024-07-15 20:34:26.704900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.439 [2024-07-15 20:34:26.704917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.439 [2024-07-15 20:34:26.704939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.439 [2024-07-15 20:34:26.704963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:08.439 [2024-07-15 20:34:26.704986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.439 [2024-07-15 20:34:26.705003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.439 [2024-07-15 20:34:26.705033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.439 [2024-07-15 20:34:26.705048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.439 [2024-07-15 20:34:26.705070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:08.439 [... repeated nvme_qpair.c NOTICE output: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs for queued READ and WRITE commands on sqid:1 nsid:1 (varying cid and lba, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, timestamps 2024-07-15 20:34:26.705084 through 20:34:26.726325 ...]
00:18:08.443 [2024-07-15 20:34:26.726347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.443 [2024-07-15 20:34:26.726362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:08.443 [2024-07-15 20:34:26.726383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.443 [2024-07-15 20:34:26.726398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.726664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.726972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.726987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.727023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.727060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.727096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.727133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.727169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.727211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.727242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.727259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.728218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.728257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.728300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.444 [2024-07-15 20:34:26.728374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.444 [2024-07-15 20:34:26.728606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.728646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.728696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.444 [2024-07-15 20:34:26.728720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.444 [2024-07-15 20:34:26.728735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.728757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.728772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.728794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.728809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.728831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.728846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.730708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.730754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.730792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.730829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.730880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.730927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.730974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.730995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:18:08.445 [2024-07-15 20:34:26.731374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.731943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.731965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.445 [2024-07-15 20:34:26.731981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.733972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734134] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.445 [2024-07-15 20:34:26.734266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.445 [2024-07-15 20:34:26.734281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.734318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:08.446 [2024-07-15 20:34:26.734519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.734630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.734775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.734820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.734858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.734969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.734985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.735105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.735216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.735977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.735993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.736014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.736030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.736051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.736066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 
m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.736088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.736103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.736126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.736142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.738849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.738894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.738937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.738956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.738984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.738999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.739021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.446 [2024-07-15 20:34:26.739037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.739058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.739074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.739095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.739111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.739132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.446 [2024-07-15 20:34:26.739147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:08.446 [2024-07-15 20:34:26.739169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.739295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.739494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.739516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.739532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740494] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:08.447 [2024-07-15 20:34:26.740922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.740981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.740996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.741338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.741579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.741595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.742642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.742690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.742727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.447 [2024-07-15 20:34:26.742765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.742815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.742852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:08.447 [2024-07-15 20:34:26.742888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.447 [2024-07-15 20:34:26.742907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.742929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.742944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.742965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.742981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.743054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:08.448 [2024-07-15 20:34:26.743075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.743236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.743283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.743380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.743395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.448 [2024-07-15 20:34:26.745517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745642] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:08.448 [2024-07-15 20:34:26.745808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.448 [2024-07-15 20:34:26.745824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.745845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.745860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.745897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.745914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.745936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.745951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.745972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.745987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.746256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.746366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.746402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.746439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.746476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.746522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.746543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.746559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748342] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.748699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.748716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003c p:0 m:0 
dnr:0 00:18:08.449 [2024-07-15 20:34:26.748738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.748753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.749555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.749622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.749660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.449 [2024-07-15 20:34:26.749698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.749734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.749771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.749820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.749859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.749914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:08.449 [2024-07-15 20:34:26.749936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.449 [2024-07-15 20:34:26.749951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.749972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.749987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.450 [2024-07-15 20:34:26.750134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.450 [2024-07-15 20:34:26.750170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.450 [2024-07-15 20:34:26.750207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.450 [2024-07-15 20:34:26.750289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.450 [2024-07-15 20:34:26.750327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:08.450 [2024-07-15 20:34:26.750364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.450 [2024-07-15 20:34:26.750619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:08.450 [2024-07-15 20:34:26.750641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:08.450 [2024-07-15 20:34:26.750660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:08.450 Received shutdown signal, test time was about 36.152962 seconds 00:18:08.450 00:18:08.450 Latency(us) 00:18:08.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.450 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.450 Verification LBA range: start 0x0 length 0x4000 00:18:08.450 Nvme0n1 : 36.15 8261.09 32.27 0.00 0.00 15462.38 189.91 4026531.84 00:18:08.450 =================================================================================================================== 00:18:08.450 Total : 8261.09 32.27 0.00 0.00 15462.38 189.91 4026531.84 00:18:08.450 20:34:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.709 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:18:08.709 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.967 rmmod nvme_tcp 00:18:08.967 rmmod nvme_fabrics 00:18:08.967 rmmod nvme_keyring 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 89037 ']' 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 89037 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 89037 ']' 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 89037 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 89037 00:18:08.967 killing process with pid 89037 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
89037' 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 89037 00:18:08.967 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 89037 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:09.226 00:18:09.226 real 0m42.459s 00:18:09.226 user 2m20.109s 00:18:09.226 sys 0m10.116s 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.226 20:34:30 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:09.226 ************************************ 00:18:09.226 END TEST nvmf_host_multipath_status 00:18:09.226 ************************************ 00:18:09.226 20:34:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:09.226 20:34:30 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:09.226 20:34:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.226 20:34:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.226 20:34:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.226 ************************************ 00:18:09.226 START TEST nvmf_discovery_remove_ifc 00:18:09.226 ************************************ 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:18:09.226 * Looking for test storage... 
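Before the discovery_remove_ifc run gets going, it is worth spelling out what the multipath_status teardown traced above actually did: delete the NVMe-oF subsystem from the running target over JSON-RPC, unload the host-side kernel modules, and stop the target process. A minimal sketch of those three steps, assuming the same repo checkout path as the trace and using a placeholder variable for the target pid (89037 in the run above):

# 1. Remove the subsystem from the running SPDK target via JSON-RPC
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# 2. Unload the host-side kernel modules (the rmmod lines above show nvme_tcp,
#    nvme_fabrics and nvme_keyring being dropped as a result)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# 3. Stop the target application; wait only succeeds if it was started from this shell
tgt_pid=89037
kill "$tgt_pid"
wait "$tgt_pid" 2>/dev/null || true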
00:18:09.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:09.226 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:09.485 Cannot find device "nvmf_tgt_br" 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 
00:18:09.485 Cannot find device "nvmf_tgt_br2" 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:09.485 Cannot find device "nvmf_tgt_br" 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:09.485 Cannot find device "nvmf_tgt_br2" 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:09.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:18:09.485 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:09.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:09.486 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:09.744 20:34:30 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:09.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:09.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:18:09.744 00:18:09.744 --- 10.0.0.2 ping statistics --- 00:18:09.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.744 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:09.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:09.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:18:09.744 00:18:09.744 --- 10.0.0.3 ping statistics --- 00:18:09.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.744 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:09.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:09.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:09.744 00:18:09.744 --- 10.0.0.1 ping statistics --- 00:18:09.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:09.744 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=90464 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 90464 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90464 ']' 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.744 20:34:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:09.744 [2024-07-15 20:34:31.129232] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:18:09.744 [2024-07-15 20:34:31.129328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.002 [2024-07-15 20:34:31.262498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.002 [2024-07-15 20:34:31.324844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.002 [2024-07-15 20:34:31.324941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.002 [2024-07-15 20:34:31.324954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.002 [2024-07-15 20:34:31.324963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.002 [2024-07-15 20:34:31.324970] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:10.002 [2024-07-15 20:34:31.325001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.938 [2024-07-15 20:34:32.193195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.938 [2024-07-15 20:34:32.201312] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:10.938 null0 00:18:10.938 [2024-07-15 20:34:32.233252] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.938 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
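At this point the trace has one SPDK nvmf_tgt (pid 90464) running inside the nvmf_tgt_ns_spdk namespace, with the discovery service on 10.0.0.2:8009 and an I/O listener on 10.0.0.2:4420, and it is about to start a second SPDK app on the host side (its command line follows below) that acts as the NVMe-oF initiator through bdev_nvme. A minimal sketch of that two-process layout, reusing the binary path shown in this trace and using scripts/rpc.py plus rpc_get_methods as a stand-in for the autotest rpc_cmd/waitforlisten helpers (those substitutions are assumptions, not taken from this log):

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin                                  # path as used in the trace
    ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &      # target side (nvmfpid)
    "$SPDK_BIN/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &      # host/initiator side (hostpid)
    # wait until the host-side app answers RPCs on /tmp/host.sock before configuring it
    until scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done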
00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90515 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90515 /tmp/host.sock 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 90515 ']' 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.938 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:10.938 [2024-07-15 20:34:32.315762] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:18:10.938 [2024-07-15 20:34:32.315863] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90515 ] 00:18:11.197 [2024-07-15 20:34:32.454664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.197 [2024-07-15 20:34:32.525006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 
--fast-io-fail-timeout-sec 1 --wait-for-attach 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.197 20:34:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.573 [2024-07-15 20:34:33.646307] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:12.573 [2024-07-15 20:34:33.646348] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:12.573 [2024-07-15 20:34:33.646368] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:12.573 [2024-07-15 20:34:33.732468] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:12.573 [2024-07-15 20:34:33.789650] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:12.573 [2024-07-15 20:34:33.789733] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:12.573 [2024-07-15 20:34:33.789763] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:12.573 [2024-07-15 20:34:33.789781] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:12.573 [2024-07-15 20:34:33.789810] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.573 [2024-07-15 20:34:33.794972] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xca4650 was disconnected and freed. delete nvme_qpair. 
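The @29/@33/@34 fragments around here are the test's bdev-list polling helper being traced one pipeline stage at a time. Read back together, a simplified version of what those lines do looks like the sketch below; the real helpers in discovery_remove_ifc.sh presumably also bound the number of retries, and scripts/rpc.py here stands in for the rpc_cmd wrapper seen in the trace:

    get_bdev_list() {
        # list the bdev names known to the host-side app, normalized to one sorted line
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # poll once a second (the @34 "sleep 1" lines) until the list matches the expected
        # value: "nvme0n1" after discovery attaches, "" after the interface is pulled
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }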
00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.573 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:12.574 20:34:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:13.508 20:34:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:14.883 20:34:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:14.883 20:34:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.883 20:34:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:14.883 20:34:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:15.817 20:34:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.751 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:16.752 20:34:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:18.127 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:18.127 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:18.127 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:18.127 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.128 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:18.128 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:18.128 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:18.128 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.128 [2024-07-15 20:34:39.217495] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:18:18.128 [2024-07-15 20:34:39.217561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.128 [2024-07-15 20:34:39.217578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.128 [2024-07-15 20:34:39.217591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.128 [2024-07-15 20:34:39.217600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.128 [2024-07-15 20:34:39.217610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.128 [2024-07-15 20:34:39.217619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.128 [2024-07-15 20:34:39.217630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.128 [2024-07-15 20:34:39.217639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.128 [2024-07-15 20:34:39.217649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:18.128 [2024-07-15 20:34:39.217658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.128 [2024-07-15 20:34:39.217667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6d900 is same with the state(5) to be set 00:18:18.128 [2024-07-15 20:34:39.227490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6d900 (9): Bad file descriptor 00:18:18.128 [2024-07-15 20:34:39.237517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:18.128 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:18.128 20:34:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:19.062 [2024-07-15 20:34:40.285954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:18:19.062 [2024-07-15 20:34:40.286081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc6d900 with addr=10.0.0.2, port=4420 00:18:19.062 [2024-07-15 20:34:40.286115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc6d900 is same with the state(5) to be set 00:18:19.062 [2024-07-15 20:34:40.286180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc6d900 (9): Bad file descriptor 00:18:19.062 [2024-07-15 20:34:40.287067] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:19.062 [2024-07-15 20:34:40.287144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.062 [2024-07-15 20:34:40.287166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.062 [2024-07-15 20:34:40.287187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.062 [2024-07-15 20:34:40.287227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:19.062 [2024-07-15 20:34:40.287248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:18:19.062 20:34:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:19.996 [2024-07-15 20:34:41.287309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:19.996 [2024-07-15 20:34:41.287379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:19.996 [2024-07-15 20:34:41.287392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:19.996 [2024-07-15 20:34:41.287403] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:18:19.996 [2024-07-15 20:34:41.287425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
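The errno 110 timeouts and the failed reset above are the intended fault: a few seconds earlier in this same trace (discovery_remove_ifc.sh@75/@76) the target's data interface lost its address and was taken down inside the namespace, so every reconnect attempt from the host side times out. Restated as a plain sequence; the timeout behaviour in the comments is an interpretation of the options passed to the earlier bdev_nvme_start_discovery call, not something this log states explicitly:

    # fault injection: make 10.0.0.2:4420/8009 unreachable from the host side
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
    # with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 the bdev_nvme layer
    # retries roughly once a second, then drops the controller, so nvme0n1 disappears
    # from bdev_get_bdevs and wait_for_bdev '' can return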
00:18:19.996 [2024-07-15 20:34:41.287463] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:18:19.996 [2024-07-15 20:34:41.287524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.996 [2024-07-15 20:34:41.287541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.996 [2024-07-15 20:34:41.287555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.996 [2024-07-15 20:34:41.287564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.996 [2024-07-15 20:34:41.287575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.996 [2024-07-15 20:34:41.287584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.996 [2024-07-15 20:34:41.287594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.996 [2024-07-15 20:34:41.287603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.996 [2024-07-15 20:34:41.287614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:19.996 [2024-07-15 20:34:41.287623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:19.996 [2024-07-15 20:34:41.287632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:18:19.996 [2024-07-15 20:34:41.287985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc103e0 (9): Bad file descriptor 00:18:19.996 [2024-07-15 20:34:41.288998] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:18:19.996 [2024-07-15 20:34:41.289042] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:19.996 20:34:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:18:21.372 20:34:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:18:21.941 [2024-07-15 20:34:43.298115] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:21.941 [2024-07-15 20:34:43.298160] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:21.941 [2024-07-15 20:34:43.298183] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:21.941 [2024-07-15 20:34:43.384287] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:18:21.941 [2024-07-15 20:34:43.440604] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:18:21.941 [2024-07-15 20:34:43.440699] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:18:21.941 [2024-07-15 20:34:43.440737] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:18:22.201 [2024-07-15 20:34:43.440766] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:18:22.201 [2024-07-15 20:34:43.440779] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:22.201 [2024-07-15 20:34:43.446640] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc89300 was disconnected and freed. delete nvme_qpair. 
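This is the recovery half of the test: once the address and link were restored (the @82/@83 commands above), the discovery poller left running in the host-side app reconnected to 10.0.0.2:8009 and re-attached the subsystem under a fresh controller name, nvme1, so the namespace comes back as nvme1n1; no further bdev_nvme RPCs appear in the trace between the link-up and the new attach. Restated:

    # bring the target's data interface back inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # the still-active discovery service reconnects on its own; the test then runs
    # wait_for_bdev nvme1n1, i.e. polls get_bdev_list until "nvme1n1" shows up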
00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90515 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90515 ']' 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90515 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90515 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:22.201 killing process with pid 90515 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90515' 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90515 00:18:22.201 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90515 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.459 rmmod nvme_tcp 00:18:22.459 rmmod nvme_fabrics 00:18:22.459 rmmod nvme_keyring 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:18:22.459 20:34:43 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 90464 ']' 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 90464 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 90464 ']' 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 90464 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 90464 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.459 killing process with pid 90464 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 90464' 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 90464 00:18:22.459 20:34:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 90464 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:22.717 00:18:22.717 real 0m13.503s 00:18:22.717 user 0m23.973s 00:18:22.717 sys 0m1.492s 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.717 20:34:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:18:22.717 ************************************ 00:18:22.717 END TEST nvmf_discovery_remove_ifc 00:18:22.717 ************************************ 00:18:22.717 20:34:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:22.717 20:34:44 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:22.717 20:34:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:22.717 20:34:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.717 20:34:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.717 ************************************ 00:18:22.717 START TEST nvmf_identify_kernel_target 00:18:22.717 ************************************ 00:18:22.717 20:34:44 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:18:22.717 * Looking for test storage... 00:18:22.976 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.976 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:22.977 Cannot find device "nvmf_tgt_br" 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:22.977 Cannot find device "nvmf_tgt_br2" 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:22.977 Cannot find device "nvmf_tgt_br" 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:22.977 Cannot find device "nvmf_tgt_br2" 00:18:22.977 20:34:44 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:22.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:22.977 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:22.977 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:23.236 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:23.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:18:23.237 00:18:23.237 --- 10.0.0.2 ping statistics --- 00:18:23.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.237 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:23.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:23.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:18:23.237 00:18:23.237 --- 10.0.0.3 ping statistics --- 00:18:23.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.237 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:23.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:18:23.237 00:18:23.237 --- 10.0.0.1 ping statistics --- 00:18:23.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.237 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:23.237 20:34:44 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:23.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.496 Waiting for block devices as requested 00:18:23.754 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:23.754 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:23.754 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:24.013 No valid GPT data, bailing 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:24.013 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:24.014 No valid GPT data, bailing 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:24.014 No valid GPT data, bailing 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:24.014 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:24.014 No valid GPT data, bailing 00:18:24.273 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:24.273 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:18:24.273 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:18:24.273 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:24.273 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
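The records above iterate over /sys/block/nvme*, skip zoned namespaces, and use spdk-gpt.py plus blkid to confirm a device carries no partition table before picking it as the backing device for the kernel target (the trace ends up with /dev/nvme1n1). A minimal standalone sketch of that selection logic follows; it is an approximation, not the harness code itself: the real helpers (is_block_zoned, block_in_use) also invoke scripts/spdk-gpt.py, which is omitted here, and they keep scanning so the last free namespace wins, whereas this sketch stops at the first match.

```bash
#!/usr/bin/env bash
# Sketch of the namespace-selection loop seen in the trace above.
# Assumption: a device is "free" when it is not zoned and blkid reports no
# partition-table signature; the harness additionally runs spdk-gpt.py.
set -euo pipefail

pick_free_nvme() {
    local block dev
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=${block##*/}
        # Skip zoned namespaces (queue/zoned reports something other than "none").
        if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
            continue
        fi
        # Treat a device with any partition-table signature as "in use".
        if [[ -z $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]]; then
            echo "/dev/$dev"
            return 0
        fi
    done
    return 1
}

nvme=$(pick_free_nvme) || { echo "no free NVMe namespace found" >&2; exit 1; }
echo "selected $nvme for the kernel nvmet namespace"
```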
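The mkdir just above and the records that follow build a kernel NVMe-oF target for nqn.2016-06.io.spdk:testnqn through configfs and expose it on 10.0.0.1:4420 over TCP. Because xtrace does not show redirection targets, the attribute file names in the sketch below are the standard nvmet configfs entries rather than values copied from the trace; the NQN, backing device, address, transport, and port are the ones shown in the log, and the model-string write visible in the trace (echo SPDK-nqn...) is left out.

```bash
#!/usr/bin/env bash
# Sketch of the kernel nvmet target setup performed by the following records.
# NOTE: attribute names (attr_allow_any_host, device_path, enable, addr_*) are
# the usual nvmet configfs files and are assumed, since the trace only shows
# the echoed values, not where they are redirected.
set -euo pipefail

nqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
ns=$subsys/namespaces/1
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet nvmet-tcp          # the trace loads nvmet earlier; nvmet-tcp added here for completeness
mkdir -p "$subsys" "$ns" "$port"

echo 1            > "$subsys/attr_allow_any_host"   # accept any host NQN
echo /dev/nvme1n1 > "$ns/device_path"               # back the namespace with the free device found above
echo 1            > "$ns/enable"

echo 10.0.0.1 > "$port/addr_traddr"                 # listen on the initiator-side address
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                 # expose the subsystem on the port
```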
00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -a 10.0.0.1 -t tcp -s 4420 00:18:24.274 00:18:24.274 Discovery Log Number of Records 2, Generation counter 2 00:18:24.274 =====Discovery Log Entry 0====== 00:18:24.274 trtype: tcp 00:18:24.274 adrfam: ipv4 00:18:24.274 subtype: current discovery subsystem 00:18:24.274 treq: not specified, sq flow control disable supported 00:18:24.274 portid: 1 00:18:24.274 trsvcid: 4420 00:18:24.274 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:24.274 traddr: 10.0.0.1 00:18:24.274 eflags: none 00:18:24.274 sectype: none 00:18:24.274 =====Discovery Log Entry 1====== 00:18:24.274 trtype: tcp 00:18:24.274 adrfam: ipv4 00:18:24.274 subtype: nvme subsystem 00:18:24.274 treq: not specified, sq flow control disable supported 00:18:24.274 portid: 1 00:18:24.274 trsvcid: 4420 00:18:24.274 subnqn: nqn.2016-06.io.spdk:testnqn 00:18:24.274 traddr: 10.0.0.1 00:18:24.274 eflags: none 00:18:24.274 sectype: none 00:18:24.274 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:18:24.274 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:18:24.274 ===================================================== 00:18:24.274 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:18:24.274 ===================================================== 00:18:24.274 Controller Capabilities/Features 00:18:24.274 ================================ 00:18:24.274 Vendor ID: 0000 00:18:24.274 Subsystem Vendor ID: 0000 00:18:24.274 Serial Number: a992d9e14b600fb3a549 00:18:24.274 Model Number: Linux 00:18:24.274 Firmware Version: 6.7.0-68 00:18:24.274 Recommended Arb Burst: 0 00:18:24.274 IEEE OUI Identifier: 00 00 00 00:18:24.274 Multi-path I/O 00:18:24.274 May have multiple subsystem ports: No 00:18:24.274 May have multiple controllers: No 00:18:24.274 Associated with SR-IOV VF: No 00:18:24.274 Max Data Transfer Size: Unlimited 00:18:24.274 Max Number of Namespaces: 0 
00:18:24.274 Max Number of I/O Queues: 1024 00:18:24.274 NVMe Specification Version (VS): 1.3 00:18:24.274 NVMe Specification Version (Identify): 1.3 00:18:24.274 Maximum Queue Entries: 1024 00:18:24.274 Contiguous Queues Required: No 00:18:24.274 Arbitration Mechanisms Supported 00:18:24.274 Weighted Round Robin: Not Supported 00:18:24.274 Vendor Specific: Not Supported 00:18:24.274 Reset Timeout: 7500 ms 00:18:24.274 Doorbell Stride: 4 bytes 00:18:24.274 NVM Subsystem Reset: Not Supported 00:18:24.274 Command Sets Supported 00:18:24.274 NVM Command Set: Supported 00:18:24.274 Boot Partition: Not Supported 00:18:24.274 Memory Page Size Minimum: 4096 bytes 00:18:24.274 Memory Page Size Maximum: 4096 bytes 00:18:24.274 Persistent Memory Region: Not Supported 00:18:24.274 Optional Asynchronous Events Supported 00:18:24.274 Namespace Attribute Notices: Not Supported 00:18:24.274 Firmware Activation Notices: Not Supported 00:18:24.274 ANA Change Notices: Not Supported 00:18:24.274 PLE Aggregate Log Change Notices: Not Supported 00:18:24.274 LBA Status Info Alert Notices: Not Supported 00:18:24.274 EGE Aggregate Log Change Notices: Not Supported 00:18:24.274 Normal NVM Subsystem Shutdown event: Not Supported 00:18:24.274 Zone Descriptor Change Notices: Not Supported 00:18:24.274 Discovery Log Change Notices: Supported 00:18:24.274 Controller Attributes 00:18:24.274 128-bit Host Identifier: Not Supported 00:18:24.274 Non-Operational Permissive Mode: Not Supported 00:18:24.274 NVM Sets: Not Supported 00:18:24.274 Read Recovery Levels: Not Supported 00:18:24.274 Endurance Groups: Not Supported 00:18:24.274 Predictable Latency Mode: Not Supported 00:18:24.274 Traffic Based Keep ALive: Not Supported 00:18:24.274 Namespace Granularity: Not Supported 00:18:24.274 SQ Associations: Not Supported 00:18:24.274 UUID List: Not Supported 00:18:24.274 Multi-Domain Subsystem: Not Supported 00:18:24.274 Fixed Capacity Management: Not Supported 00:18:24.274 Variable Capacity Management: Not Supported 00:18:24.274 Delete Endurance Group: Not Supported 00:18:24.274 Delete NVM Set: Not Supported 00:18:24.274 Extended LBA Formats Supported: Not Supported 00:18:24.274 Flexible Data Placement Supported: Not Supported 00:18:24.274 00:18:24.274 Controller Memory Buffer Support 00:18:24.274 ================================ 00:18:24.274 Supported: No 00:18:24.274 00:18:24.274 Persistent Memory Region Support 00:18:24.274 ================================ 00:18:24.274 Supported: No 00:18:24.274 00:18:24.274 Admin Command Set Attributes 00:18:24.274 ============================ 00:18:24.274 Security Send/Receive: Not Supported 00:18:24.274 Format NVM: Not Supported 00:18:24.274 Firmware Activate/Download: Not Supported 00:18:24.274 Namespace Management: Not Supported 00:18:24.274 Device Self-Test: Not Supported 00:18:24.274 Directives: Not Supported 00:18:24.274 NVMe-MI: Not Supported 00:18:24.274 Virtualization Management: Not Supported 00:18:24.274 Doorbell Buffer Config: Not Supported 00:18:24.274 Get LBA Status Capability: Not Supported 00:18:24.274 Command & Feature Lockdown Capability: Not Supported 00:18:24.274 Abort Command Limit: 1 00:18:24.274 Async Event Request Limit: 1 00:18:24.274 Number of Firmware Slots: N/A 00:18:24.274 Firmware Slot 1 Read-Only: N/A 00:18:24.274 Firmware Activation Without Reset: N/A 00:18:24.274 Multiple Update Detection Support: N/A 00:18:24.274 Firmware Update Granularity: No Information Provided 00:18:24.274 Per-Namespace SMART Log: No 00:18:24.274 Asymmetric Namespace Access Log Page: 
Not Supported 00:18:24.274 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:18:24.274 Command Effects Log Page: Not Supported 00:18:24.274 Get Log Page Extended Data: Supported 00:18:24.274 Telemetry Log Pages: Not Supported 00:18:24.274 Persistent Event Log Pages: Not Supported 00:18:24.274 Supported Log Pages Log Page: May Support 00:18:24.274 Commands Supported & Effects Log Page: Not Supported 00:18:24.274 Feature Identifiers & Effects Log Page:May Support 00:18:24.274 NVMe-MI Commands & Effects Log Page: May Support 00:18:24.274 Data Area 4 for Telemetry Log: Not Supported 00:18:24.274 Error Log Page Entries Supported: 1 00:18:24.274 Keep Alive: Not Supported 00:18:24.274 00:18:24.274 NVM Command Set Attributes 00:18:24.274 ========================== 00:18:24.274 Submission Queue Entry Size 00:18:24.274 Max: 1 00:18:24.274 Min: 1 00:18:24.274 Completion Queue Entry Size 00:18:24.274 Max: 1 00:18:24.274 Min: 1 00:18:24.274 Number of Namespaces: 0 00:18:24.274 Compare Command: Not Supported 00:18:24.274 Write Uncorrectable Command: Not Supported 00:18:24.274 Dataset Management Command: Not Supported 00:18:24.274 Write Zeroes Command: Not Supported 00:18:24.274 Set Features Save Field: Not Supported 00:18:24.274 Reservations: Not Supported 00:18:24.274 Timestamp: Not Supported 00:18:24.274 Copy: Not Supported 00:18:24.274 Volatile Write Cache: Not Present 00:18:24.274 Atomic Write Unit (Normal): 1 00:18:24.274 Atomic Write Unit (PFail): 1 00:18:24.274 Atomic Compare & Write Unit: 1 00:18:24.274 Fused Compare & Write: Not Supported 00:18:24.274 Scatter-Gather List 00:18:24.274 SGL Command Set: Supported 00:18:24.274 SGL Keyed: Not Supported 00:18:24.274 SGL Bit Bucket Descriptor: Not Supported 00:18:24.274 SGL Metadata Pointer: Not Supported 00:18:24.274 Oversized SGL: Not Supported 00:18:24.274 SGL Metadata Address: Not Supported 00:18:24.274 SGL Offset: Supported 00:18:24.274 Transport SGL Data Block: Not Supported 00:18:24.274 Replay Protected Memory Block: Not Supported 00:18:24.274 00:18:24.274 Firmware Slot Information 00:18:24.274 ========================= 00:18:24.274 Active slot: 0 00:18:24.274 00:18:24.274 00:18:24.274 Error Log 00:18:24.274 ========= 00:18:24.274 00:18:24.274 Active Namespaces 00:18:24.274 ================= 00:18:24.274 Discovery Log Page 00:18:24.274 ================== 00:18:24.274 Generation Counter: 2 00:18:24.274 Number of Records: 2 00:18:24.274 Record Format: 0 00:18:24.274 00:18:24.274 Discovery Log Entry 0 00:18:24.274 ---------------------- 00:18:24.274 Transport Type: 3 (TCP) 00:18:24.274 Address Family: 1 (IPv4) 00:18:24.274 Subsystem Type: 3 (Current Discovery Subsystem) 00:18:24.274 Entry Flags: 00:18:24.274 Duplicate Returned Information: 0 00:18:24.274 Explicit Persistent Connection Support for Discovery: 0 00:18:24.274 Transport Requirements: 00:18:24.274 Secure Channel: Not Specified 00:18:24.274 Port ID: 1 (0x0001) 00:18:24.275 Controller ID: 65535 (0xffff) 00:18:24.275 Admin Max SQ Size: 32 00:18:24.275 Transport Service Identifier: 4420 00:18:24.275 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:18:24.275 Transport Address: 10.0.0.1 00:18:24.275 Discovery Log Entry 1 00:18:24.275 ---------------------- 00:18:24.275 Transport Type: 3 (TCP) 00:18:24.275 Address Family: 1 (IPv4) 00:18:24.275 Subsystem Type: 2 (NVM Subsystem) 00:18:24.275 Entry Flags: 00:18:24.275 Duplicate Returned Information: 0 00:18:24.275 Explicit Persistent Connection Support for Discovery: 0 00:18:24.275 Transport Requirements: 00:18:24.275 
Secure Channel: Not Specified 00:18:24.275 Port ID: 1 (0x0001) 00:18:24.275 Controller ID: 65535 (0xffff) 00:18:24.275 Admin Max SQ Size: 32 00:18:24.275 Transport Service Identifier: 4420 00:18:24.275 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:18:24.275 Transport Address: 10.0.0.1 00:18:24.275 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:18:24.534 get_feature(0x01) failed 00:18:24.534 get_feature(0x02) failed 00:18:24.534 get_feature(0x04) failed 00:18:24.534 ===================================================== 00:18:24.534 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:18:24.534 ===================================================== 00:18:24.534 Controller Capabilities/Features 00:18:24.534 ================================ 00:18:24.534 Vendor ID: 0000 00:18:24.534 Subsystem Vendor ID: 0000 00:18:24.534 Serial Number: 802c8cf2fc3ec85b96b9 00:18:24.534 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:18:24.534 Firmware Version: 6.7.0-68 00:18:24.534 Recommended Arb Burst: 6 00:18:24.534 IEEE OUI Identifier: 00 00 00 00:18:24.534 Multi-path I/O 00:18:24.534 May have multiple subsystem ports: Yes 00:18:24.534 May have multiple controllers: Yes 00:18:24.534 Associated with SR-IOV VF: No 00:18:24.534 Max Data Transfer Size: Unlimited 00:18:24.534 Max Number of Namespaces: 1024 00:18:24.534 Max Number of I/O Queues: 128 00:18:24.534 NVMe Specification Version (VS): 1.3 00:18:24.534 NVMe Specification Version (Identify): 1.3 00:18:24.534 Maximum Queue Entries: 1024 00:18:24.534 Contiguous Queues Required: No 00:18:24.534 Arbitration Mechanisms Supported 00:18:24.534 Weighted Round Robin: Not Supported 00:18:24.534 Vendor Specific: Not Supported 00:18:24.534 Reset Timeout: 7500 ms 00:18:24.534 Doorbell Stride: 4 bytes 00:18:24.534 NVM Subsystem Reset: Not Supported 00:18:24.534 Command Sets Supported 00:18:24.534 NVM Command Set: Supported 00:18:24.534 Boot Partition: Not Supported 00:18:24.534 Memory Page Size Minimum: 4096 bytes 00:18:24.534 Memory Page Size Maximum: 4096 bytes 00:18:24.534 Persistent Memory Region: Not Supported 00:18:24.534 Optional Asynchronous Events Supported 00:18:24.534 Namespace Attribute Notices: Supported 00:18:24.534 Firmware Activation Notices: Not Supported 00:18:24.534 ANA Change Notices: Supported 00:18:24.534 PLE Aggregate Log Change Notices: Not Supported 00:18:24.534 LBA Status Info Alert Notices: Not Supported 00:18:24.534 EGE Aggregate Log Change Notices: Not Supported 00:18:24.534 Normal NVM Subsystem Shutdown event: Not Supported 00:18:24.534 Zone Descriptor Change Notices: Not Supported 00:18:24.534 Discovery Log Change Notices: Not Supported 00:18:24.534 Controller Attributes 00:18:24.534 128-bit Host Identifier: Supported 00:18:24.534 Non-Operational Permissive Mode: Not Supported 00:18:24.534 NVM Sets: Not Supported 00:18:24.534 Read Recovery Levels: Not Supported 00:18:24.534 Endurance Groups: Not Supported 00:18:24.534 Predictable Latency Mode: Not Supported 00:18:24.534 Traffic Based Keep ALive: Supported 00:18:24.534 Namespace Granularity: Not Supported 00:18:24.534 SQ Associations: Not Supported 00:18:24.534 UUID List: Not Supported 00:18:24.534 Multi-Domain Subsystem: Not Supported 00:18:24.534 Fixed Capacity Management: Not Supported 00:18:24.534 Variable Capacity Management: Not Supported 00:18:24.534 
Delete Endurance Group: Not Supported 00:18:24.534 Delete NVM Set: Not Supported 00:18:24.534 Extended LBA Formats Supported: Not Supported 00:18:24.534 Flexible Data Placement Supported: Not Supported 00:18:24.534 00:18:24.534 Controller Memory Buffer Support 00:18:24.534 ================================ 00:18:24.534 Supported: No 00:18:24.534 00:18:24.534 Persistent Memory Region Support 00:18:24.534 ================================ 00:18:24.534 Supported: No 00:18:24.534 00:18:24.534 Admin Command Set Attributes 00:18:24.534 ============================ 00:18:24.534 Security Send/Receive: Not Supported 00:18:24.534 Format NVM: Not Supported 00:18:24.534 Firmware Activate/Download: Not Supported 00:18:24.534 Namespace Management: Not Supported 00:18:24.534 Device Self-Test: Not Supported 00:18:24.534 Directives: Not Supported 00:18:24.534 NVMe-MI: Not Supported 00:18:24.534 Virtualization Management: Not Supported 00:18:24.534 Doorbell Buffer Config: Not Supported 00:18:24.534 Get LBA Status Capability: Not Supported 00:18:24.534 Command & Feature Lockdown Capability: Not Supported 00:18:24.534 Abort Command Limit: 4 00:18:24.534 Async Event Request Limit: 4 00:18:24.534 Number of Firmware Slots: N/A 00:18:24.534 Firmware Slot 1 Read-Only: N/A 00:18:24.534 Firmware Activation Without Reset: N/A 00:18:24.534 Multiple Update Detection Support: N/A 00:18:24.534 Firmware Update Granularity: No Information Provided 00:18:24.534 Per-Namespace SMART Log: Yes 00:18:24.534 Asymmetric Namespace Access Log Page: Supported 00:18:24.534 ANA Transition Time : 10 sec 00:18:24.534 00:18:24.534 Asymmetric Namespace Access Capabilities 00:18:24.534 ANA Optimized State : Supported 00:18:24.534 ANA Non-Optimized State : Supported 00:18:24.534 ANA Inaccessible State : Supported 00:18:24.534 ANA Persistent Loss State : Supported 00:18:24.534 ANA Change State : Supported 00:18:24.534 ANAGRPID is not changed : No 00:18:24.534 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:18:24.534 00:18:24.534 ANA Group Identifier Maximum : 128 00:18:24.534 Number of ANA Group Identifiers : 128 00:18:24.534 Max Number of Allowed Namespaces : 1024 00:18:24.534 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:18:24.534 Command Effects Log Page: Supported 00:18:24.534 Get Log Page Extended Data: Supported 00:18:24.534 Telemetry Log Pages: Not Supported 00:18:24.534 Persistent Event Log Pages: Not Supported 00:18:24.534 Supported Log Pages Log Page: May Support 00:18:24.534 Commands Supported & Effects Log Page: Not Supported 00:18:24.534 Feature Identifiers & Effects Log Page:May Support 00:18:24.534 NVMe-MI Commands & Effects Log Page: May Support 00:18:24.534 Data Area 4 for Telemetry Log: Not Supported 00:18:24.534 Error Log Page Entries Supported: 128 00:18:24.534 Keep Alive: Supported 00:18:24.535 Keep Alive Granularity: 1000 ms 00:18:24.535 00:18:24.535 NVM Command Set Attributes 00:18:24.535 ========================== 00:18:24.535 Submission Queue Entry Size 00:18:24.535 Max: 64 00:18:24.535 Min: 64 00:18:24.535 Completion Queue Entry Size 00:18:24.535 Max: 16 00:18:24.535 Min: 16 00:18:24.535 Number of Namespaces: 1024 00:18:24.535 Compare Command: Not Supported 00:18:24.535 Write Uncorrectable Command: Not Supported 00:18:24.535 Dataset Management Command: Supported 00:18:24.535 Write Zeroes Command: Supported 00:18:24.535 Set Features Save Field: Not Supported 00:18:24.535 Reservations: Not Supported 00:18:24.535 Timestamp: Not Supported 00:18:24.535 Copy: Not Supported 00:18:24.535 Volatile Write Cache: Present 
00:18:24.535 Atomic Write Unit (Normal): 1 00:18:24.535 Atomic Write Unit (PFail): 1 00:18:24.535 Atomic Compare & Write Unit: 1 00:18:24.535 Fused Compare & Write: Not Supported 00:18:24.535 Scatter-Gather List 00:18:24.535 SGL Command Set: Supported 00:18:24.535 SGL Keyed: Not Supported 00:18:24.535 SGL Bit Bucket Descriptor: Not Supported 00:18:24.535 SGL Metadata Pointer: Not Supported 00:18:24.535 Oversized SGL: Not Supported 00:18:24.535 SGL Metadata Address: Not Supported 00:18:24.535 SGL Offset: Supported 00:18:24.535 Transport SGL Data Block: Not Supported 00:18:24.535 Replay Protected Memory Block: Not Supported 00:18:24.535 00:18:24.535 Firmware Slot Information 00:18:24.535 ========================= 00:18:24.535 Active slot: 0 00:18:24.535 00:18:24.535 Asymmetric Namespace Access 00:18:24.535 =========================== 00:18:24.535 Change Count : 0 00:18:24.535 Number of ANA Group Descriptors : 1 00:18:24.535 ANA Group Descriptor : 0 00:18:24.535 ANA Group ID : 1 00:18:24.535 Number of NSID Values : 1 00:18:24.535 Change Count : 0 00:18:24.535 ANA State : 1 00:18:24.535 Namespace Identifier : 1 00:18:24.535 00:18:24.535 Commands Supported and Effects 00:18:24.535 ============================== 00:18:24.535 Admin Commands 00:18:24.535 -------------- 00:18:24.535 Get Log Page (02h): Supported 00:18:24.535 Identify (06h): Supported 00:18:24.535 Abort (08h): Supported 00:18:24.535 Set Features (09h): Supported 00:18:24.535 Get Features (0Ah): Supported 00:18:24.535 Asynchronous Event Request (0Ch): Supported 00:18:24.535 Keep Alive (18h): Supported 00:18:24.535 I/O Commands 00:18:24.535 ------------ 00:18:24.535 Flush (00h): Supported 00:18:24.535 Write (01h): Supported LBA-Change 00:18:24.535 Read (02h): Supported 00:18:24.535 Write Zeroes (08h): Supported LBA-Change 00:18:24.535 Dataset Management (09h): Supported 00:18:24.535 00:18:24.535 Error Log 00:18:24.535 ========= 00:18:24.535 Entry: 0 00:18:24.535 Error Count: 0x3 00:18:24.535 Submission Queue Id: 0x0 00:18:24.535 Command Id: 0x5 00:18:24.535 Phase Bit: 0 00:18:24.535 Status Code: 0x2 00:18:24.535 Status Code Type: 0x0 00:18:24.535 Do Not Retry: 1 00:18:24.535 Error Location: 0x28 00:18:24.535 LBA: 0x0 00:18:24.535 Namespace: 0x0 00:18:24.535 Vendor Log Page: 0x0 00:18:24.535 ----------- 00:18:24.535 Entry: 1 00:18:24.535 Error Count: 0x2 00:18:24.535 Submission Queue Id: 0x0 00:18:24.535 Command Id: 0x5 00:18:24.535 Phase Bit: 0 00:18:24.535 Status Code: 0x2 00:18:24.535 Status Code Type: 0x0 00:18:24.535 Do Not Retry: 1 00:18:24.535 Error Location: 0x28 00:18:24.535 LBA: 0x0 00:18:24.535 Namespace: 0x0 00:18:24.535 Vendor Log Page: 0x0 00:18:24.535 ----------- 00:18:24.535 Entry: 2 00:18:24.535 Error Count: 0x1 00:18:24.535 Submission Queue Id: 0x0 00:18:24.535 Command Id: 0x4 00:18:24.535 Phase Bit: 0 00:18:24.535 Status Code: 0x2 00:18:24.535 Status Code Type: 0x0 00:18:24.535 Do Not Retry: 1 00:18:24.535 Error Location: 0x28 00:18:24.535 LBA: 0x0 00:18:24.535 Namespace: 0x0 00:18:24.535 Vendor Log Page: 0x0 00:18:24.535 00:18:24.535 Number of Queues 00:18:24.535 ================ 00:18:24.535 Number of I/O Submission Queues: 128 00:18:24.535 Number of I/O Completion Queues: 128 00:18:24.535 00:18:24.535 ZNS Specific Controller Data 00:18:24.535 ============================ 00:18:24.535 Zone Append Size Limit: 0 00:18:24.535 00:18:24.535 00:18:24.535 Active Namespaces 00:18:24.535 ================= 00:18:24.535 get_feature(0x05) failed 00:18:24.535 Namespace ID:1 00:18:24.535 Command Set Identifier: NVM (00h) 
00:18:24.535 Deallocate: Supported 00:18:24.535 Deallocated/Unwritten Error: Not Supported 00:18:24.535 Deallocated Read Value: Unknown 00:18:24.535 Deallocate in Write Zeroes: Not Supported 00:18:24.535 Deallocated Guard Field: 0xFFFF 00:18:24.535 Flush: Supported 00:18:24.535 Reservation: Not Supported 00:18:24.535 Namespace Sharing Capabilities: Multiple Controllers 00:18:24.535 Size (in LBAs): 1310720 (5GiB) 00:18:24.535 Capacity (in LBAs): 1310720 (5GiB) 00:18:24.535 Utilization (in LBAs): 1310720 (5GiB) 00:18:24.535 UUID: 01c0cc99-212d-4861-b9d7-6c9d6c70bcad 00:18:24.535 Thin Provisioning: Not Supported 00:18:24.535 Per-NS Atomic Units: Yes 00:18:24.535 Atomic Boundary Size (Normal): 0 00:18:24.535 Atomic Boundary Size (PFail): 0 00:18:24.535 Atomic Boundary Offset: 0 00:18:24.535 NGUID/EUI64 Never Reused: No 00:18:24.535 ANA group ID: 1 00:18:24.535 Namespace Write Protected: No 00:18:24.535 Number of LBA Formats: 1 00:18:24.535 Current LBA Format: LBA Format #00 00:18:24.535 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:18:24.535 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:24.535 20:34:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:24.535 rmmod nvme_tcp 00:18:24.535 rmmod nvme_fabrics 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.535 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:18:24.793 
20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:18:24.793 20:34:46 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:25.359 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:25.617 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:25.617 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:25.617 00:18:25.617 real 0m2.868s 00:18:25.617 user 0m1.006s 00:18:25.617 sys 0m1.346s 00:18:25.617 20:34:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:25.617 20:34:47 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.617 ************************************ 00:18:25.617 END TEST nvmf_identify_kernel_target 00:18:25.617 ************************************ 00:18:25.617 20:34:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:25.617 20:34:47 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:25.617 20:34:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:25.617 20:34:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.617 20:34:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:25.617 ************************************ 00:18:25.617 START TEST nvmf_auth_host 00:18:25.617 ************************************ 00:18:25.617 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:18:25.876 * Looking for test storage... 
00:18:25.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:25.876 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:25.877 Cannot find device "nvmf_tgt_br" 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:25.877 Cannot find device "nvmf_tgt_br2" 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:25.877 Cannot find device "nvmf_tgt_br" 
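The nvmf_veth_init trace around this point first tries to detach and delete any interfaces left over from a previous run (the "Cannot find device" messages are expected and tolerated), then rebuilds the test topology shown in the records that follow: a network namespace for the target, three veth pairs, a bridge joining the host-side peers, and iptables rules admitting NVMe/TCP traffic on port 4420. A condensed sketch of the build-up half, using the interface names and addresses from the trace (the teardown half and the per-command error tolerance are omitted):

```bash
#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds, per the trace: the initiator
# stays in the default netns, the target lives in nvmf_tgt_ns_spdk, and a
# bridge (nvmf_br) joins the host-side ends of the veth pairs.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: one initiator-side pair, two target-side pairs.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done

# Allow NVMe/TCP (port 4420) in and let bridged traffic be forwarded.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
```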
00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:25.877 Cannot find device "nvmf_tgt_br2" 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.877 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:26.135 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:26.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:18:26.136 00:18:26.136 --- 10.0.0.2 ping statistics --- 00:18:26.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.136 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:26.136 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:26.136 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:18:26.136 00:18:26.136 --- 10.0.0.3 ping statistics --- 00:18:26.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.136 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:26.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:18:26.136 00:18:26.136 --- 10.0.0.1 ping statistics --- 00:18:26.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.136 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=91386 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 91386 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91386 ']' 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.136 20:34:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.136 20:34:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.510 20:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ea0f8aa07660a6975b081b82676c1078 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SxN 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ea0f8aa07660a6975b081b82676c1078 0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ea0f8aa07660a6975b081b82676c1078 0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ea0f8aa07660a6975b081b82676c1078 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SxN 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SxN 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SxN 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ba743deb7b2685e81c3bee44c812bdeeda7718a61a8bb85f79d9c2b32ffbcd0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.N8t 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ba743deb7b2685e81c3bee44c812bdeeda7718a61a8bb85f79d9c2b32ffbcd0 3 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ba743deb7b2685e81c3bee44c812bdeeda7718a61a8bb85f79d9c2b32ffbcd0 3 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ba743deb7b2685e81c3bee44c812bdeeda7718a61a8bb85f79d9c2b32ffbcd0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.N8t 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.N8t 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.N8t 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0405cc9d30134fe43bd61432250a48bd908dff99973fa362 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yag 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0405cc9d30134fe43bd61432250a48bd908dff99973fa362 0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0405cc9d30134fe43bd61432250a48bd908dff99973fa362 0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0405cc9d30134fe43bd61432250a48bd908dff99973fa362 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yag 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yag 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.yag 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e5a15c508279e43040e6d43988be4a1e59293ac417db927c 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oLD 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e5a15c508279e43040e6d43988be4a1e59293ac417db927c 2 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e5a15c508279e43040e6d43988be4a1e59293ac417db927c 2 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e5a15c508279e43040e6d43988be4a1e59293ac417db927c 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oLD 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oLD 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.oLD 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=be6177009622464fc237a3426196475e 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Fec 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key be6177009622464fc237a3426196475e 
1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 be6177009622464fc237a3426196475e 1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=be6177009622464fc237a3426196475e 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Fec 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Fec 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Fec 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d5f60dd1c78f3d54ee8aecf24b13476 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BGw 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d5f60dd1c78f3d54ee8aecf24b13476 1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d5f60dd1c78f3d54ee8aecf24b13476 1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d5f60dd1c78f3d54ee8aecf24b13476 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:18:27.511 20:34:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BGw 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BGw 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.BGw 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:18:27.769 20:34:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=abb174f077364ec4c96659b615f84b6cc331ad195c8507bf 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ttP 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key abb174f077364ec4c96659b615f84b6cc331ad195c8507bf 2 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 abb174f077364ec4c96659b615f84b6cc331ad195c8507bf 2 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=abb174f077364ec4c96659b615f84b6cc331ad195c8507bf 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ttP 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ttP 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ttP 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=99482f2a4d5a1dd5184dfbe621fb2165 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fEm 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 99482f2a4d5a1dd5184dfbe621fb2165 0 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 99482f2a4d5a1dd5184dfbe621fb2165 0 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=99482f2a4d5a1dd5184dfbe621fb2165 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fEm 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fEm 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fEm 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eca8c33ea912ce898373fe576da300dd3cd66478fc47a385a2abaf7316e507bd 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xXp 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eca8c33ea912ce898373fe576da300dd3cd66478fc47a385a2abaf7316e507bd 3 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eca8c33ea912ce898373fe576da300dd3cd66478fc47a385a2abaf7316e507bd 3 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eca8c33ea912ce898373fe576da300dd3cd66478fc47a385a2abaf7316e507bd 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xXp 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xXp 00:18:27.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xXp 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91386 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 91386 ']' 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
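Annotation: the gen_dhchap_key calls traced above draw len/2 random bytes with xxd, keep them as an ASCII hex string, and wrap that string into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<digest id>:<base64 payload>: via a small embedded python snippet before chmod 0600 and echoing the temp-file path. The stand-alone sketch below is a hedged reconstruction, not the helper from nvmf/common.sh: the function name is illustrative, and the CRC-32 trailer with little-endian packing is an assumption based on the DH-HMAC-CHAP secret representation, since the python body and the redirections are not visible in the xtrace output.

# Hedged sketch only; gen_dhchap_key_sketch is a hypothetical name.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                             # e.g. "null" 32 or "sha512" 64
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # $len hex characters, kept as ASCII text
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # Secret layout: DHHC-1:<digest id>:<base64(secret || CRC-32(secret))>:
    # (trailer packing assumed from the NVMe DH-HMAC-CHAP secret representation)
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
blob = base64.b64encode(secret + struct.pack("<I", zlib.crc32(secret))).decode()
print("DHHC-1:{:02d}:{}:".format(int(sys.argv[2]), blob))
PY
    chmod 0600 "$file"
    echo "$file"                                       # e.g. /tmp/spdk.key-null.SxN
}

Called as, say, gen_dhchap_key_sketch sha256 32, this produces a single-line key file of the same shape as the DHHC-1:01:...: secrets echoed later in the trace.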
00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.769 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SxN 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.N8t ]] 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N8t 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.yag 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.oLD ]] 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.oLD 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Fec 00:18:28.336 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.BGw ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BGw 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
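Annotation: once waitforlisten 91386 returns, each generated secret is registered with the running nvmf_tgt through the keyring_file_add_key RPC, and the optional controller secret (ckeyN) is only added when one was generated for that slot (ckey4 is empty). The loop below is a roughly equivalent direct invocation of scripts/rpc.py as a sketch; the -s /var/tmp/spdk.sock argument and the explicit path arrays simply restate what rpc_cmd and the keys[]/ckeys[] arrays in the trace already carry.

# Equivalent direct RPC calls (sketch; rpc_cmd in the trace wraps scripts/rpc.py).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
keys=(/tmp/spdk.key-null.SxN /tmp/spdk.key-null.yag /tmp/spdk.key-sha256.Fec
      /tmp/spdk.key-sha384.ttP /tmp/spdk.key-sha512.xXp)
ckeys=(/tmp/spdk.key-sha512.N8t /tmp/spdk.key-sha384.oLD /tmp/spdk.key-sha256.BGw
       /tmp/spdk.key-null.fEm "")

for i in "${!keys[@]}"; do
    "$rpc" -s "$sock" keyring_file_add_key "key$i" "${keys[i]}"
    # controller keys are optional; slot 4 has none, so it is skipped
    if [[ -n ${ckeys[i]} ]]; then
        "$rpc" -s "$sock" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done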
00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ttP 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fEm ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fEm 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xXp 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
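Annotation: nvmet_auth_init resolves the initiator-side address (10.0.0.1) and then calls configure_kernel_target, which builds a kernel nvmet subsystem under the configfs paths declared above. xtrace does not show where the subsequent echo commands are redirected, so the attribute names in the sketch below are assumptions based on the standard kernel nvmet configfs layout; the echoed values themselves (the SPDK- model string, /dev/nvme1n1, 10.0.0.1, tcp, 4420, ipv4) are the ones that appear in the trace, with /dev/nvme1n1 being the backing device selected by the block-device scan further down.

# Sketch of the configfs plumbing behind configure_kernel_target;
# redirection targets are assumed, values are taken from the trace.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string; attribute assumed
echo 1            > "$subsys/attr_allow_any_host"             # assumed destination of the first 'echo 1'
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # backing device picked by the GPT scan below
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                           # publish the subsystem on port 1

The later host/auth.sh@36 through @38 steps extend the same tree: a hosts/nqn.2024-02.io.spdk:host0 entry is created and linked into the subsystem's allowed_hosts directory before the per-host DH-HMAC-CHAP keys are set.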
00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:18:28.337 20:34:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:28.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:28.594 Waiting for block devices as requested 00:18:28.594 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:28.851 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:18:29.416 No valid GPT data, bailing 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:29.416 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:18:29.417 No valid GPT data, bailing 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:18:29.417 No valid GPT data, bailing 00:18:29.417 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:18:29.675 No valid GPT data, bailing 00:18:29.675 20:34:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:18:29.675 20:34:51 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -a 10.0.0.1 -t tcp -s 4420 00:18:29.675 00:18:29.675 Discovery Log Number of Records 2, Generation counter 2 00:18:29.675 =====Discovery Log Entry 0====== 00:18:29.675 trtype: tcp 00:18:29.675 adrfam: ipv4 00:18:29.675 subtype: current discovery subsystem 00:18:29.675 treq: not specified, sq flow control disable supported 00:18:29.675 portid: 1 00:18:29.675 trsvcid: 4420 00:18:29.675 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:29.675 traddr: 10.0.0.1 00:18:29.675 eflags: none 00:18:29.675 sectype: none 00:18:29.675 =====Discovery Log Entry 1====== 00:18:29.675 trtype: tcp 00:18:29.675 adrfam: ipv4 00:18:29.675 subtype: nvme subsystem 00:18:29.675 treq: not specified, sq flow control disable supported 00:18:29.675 portid: 1 00:18:29.675 trsvcid: 4420 00:18:29.675 subnqn: nqn.2024-02.io.spdk:cnode0 00:18:29.675 traddr: 10.0.0.1 00:18:29.675 eflags: none 00:18:29.675 sectype: none 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.675 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.933 nvme0n1 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:29.933 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.934 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.192 nvme0n1 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.192 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.193 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.451 nvme0n1 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.451 20:34:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.451 nvme0n1 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.451 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:18:30.711 20:34:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.711 20:34:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 nvme0n1 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:30.711 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.970 nvme0n1 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:30.970 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:31.228 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.229 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.487 nvme0n1 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:31.487 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.488 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.745 nvme0n1 00:18:31.745 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.745 20:34:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.745 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.745 20:34:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.745 20:34:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.745 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.746 nvme0n1 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:31.746 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.004 nvme0n1 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
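
For reference, the host-side half of each connect_authenticate round traced above reduces to a handful of RPCs against the running SPDK initiator: constrain the allowed DH-HMAC-CHAP digest and DH group, attach the controller with the key under test, confirm the controller came up, and detach it again. A minimal standalone sketch follows, using scripts/rpc.py with the flags visible in the trace for the ffdhe3072/keyid=4 round; it assumes the named keys (key0..key4, ckey0..ckey3) were registered with the initiator earlier in the run, which is outside this excerpt.

  # hedged sketch of one host-side authentication round (values taken from the trace above)
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Rounds whose key has a matching controller key (keyids 0 through 3 here) additionally pass --dhchap-ctrlr-key ckeyN on the attach, exercising bidirectional authentication.
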
00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.004 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.262 nvme0n1 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:32.262 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:32.263 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:32.263 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:32.263 20:34:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
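
On the target side, the nvmet_auth_set_key helper traced at host/auth.sh@42-51 programs the matching digest, DH group, and DHHC-1 secret(s) for the host entry before each connection attempt. The xtrace output records only the echo commands, not where their output is redirected; assuming a Linux kernel nvmet target with its usual configfs layout (an assumption, since the paths never appear in this log), the helper's effect would look roughly like this:

  # hedged sketch of the target-side key programming; $key and $ckey stand for the
  # DHHC-1:... strings echoed at host/auth.sh@50 and host/auth.sh@51 above
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest   (host/auth.sh@48)
  echo 'ffdhe4096'    > "$host_dir/dhchap_dhgroup"   # DH group (host/auth.sh@49)
  echo "$key"         > "$host_dir/dhchap_key"       # per-keyid host secret (host/auth.sh@50)
  [[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"   # only for bidirectional rounds (host/auth.sh@51)
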
00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.828 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.086 nvme0n1 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.086 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.087 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.345 nvme0n1 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.346 20:34:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.604 nvme0n1 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.604 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.861 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.862 nvme0n1 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:33.862 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.118 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.118 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:34.119 20:34:55 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.119 nvme0n1 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:34.119 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:34.376 20:34:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:36.274 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:36.274 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:36.274 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:36.274 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.275 20:34:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.532 nvme0n1 00:18:36.532 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.532 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:36.532 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.532 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.532 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:36.532 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.790 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.790 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:18:36.790 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.791 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 nvme0n1 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.048 
20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.048 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.619 nvme0n1 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:37.619 20:34:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:37.620 20:34:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:37.620 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.620 20:34:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.877 nvme0n1 00:18:37.877 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.877 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:37.877 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.877 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:37.877 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:37.877 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.135 20:34:59 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:38.135 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.136 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.394 nvme0n1 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.394 20:34:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.330 nvme0n1 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 
-- # local ip 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:39.330 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.331 20:35:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 nvme0n1 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:39.897 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.155 20:35:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.724 nvme0n1 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.724 
20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
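The trace above repeats the same host-side pattern for every digest/dhgroup/keyid combination: restrict the allowed DH-HMAC-CHAP parameters with bdev_nvme_set_options, attach the controller with the keyring names for that keyid, check that it shows up in bdev_nvme_get_controllers, then detach it. A condensed sketch of one such iteration follows, built only from the RPCs and arguments visible in the trace; spdk_rpc is a hypothetical stand-in for the suite's rpc_cmd wrapper, not part of the test itself.

# Sketch of one connect/verify/teardown pass (sha256 + ffdhe8192, keyid 3),
# assuming spdk_rpc forwards to SPDK's rpc.py against the running target.
spdk_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
spdk_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key3 --dhchap-ctrlr-key ckey3
[[ "$(spdk_rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
spdk_rpc bdev_nvme_detach_controller nvme0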
00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.724 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.660 nvme0n1 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:41.660 
20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.660 20:35:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 nvme0n1 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 nvme0n1 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 20:35:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
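The ckey=(...) expansion in the record just above is what keeps --dhchap-ctrlr-key optional: when ckeys[keyid] is empty (as it is for keyid 4 earlier in the trace), the array expands to nothing and the controller is attached with --dhchap-key alone. A minimal standalone illustration of that pattern, using placeholder secrets rather than the suite's generated DHHC-1 keys:

# Placeholder secret tables; keyid 4 intentionally has no controller secret.
keys=(s0 s1 s2 s3 s4)
ckeys=(c0 c1 c2 c3 '')

for keyid in "${!keys[@]}"; do
  # Expands to "--dhchap-ctrlr-key ckey<N>" only when a controller secret exists.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> --dhchap-key key${keyid} ${ckey[*]}"
done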
00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.593 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.594 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.594 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 nvme0n1 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:42.850 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.851 nvme0n1 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:42.851 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.109 nvme0n1 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.109 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.110 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.368 nvme0n1 00:18:43.368 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.368 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.368 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.369 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.628 nvme0n1 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.628 20:35:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.628 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.629 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.629 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.629 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.629 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.629 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 nvme0n1 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.888 nvme0n1 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.888 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 nvme0n1 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.147 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 nvme0n1 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 20:35:05 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.406 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.407 20:35:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.665 nvme0n1 00:18:44.665 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.665 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:44.665 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.665 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:44.665 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.665 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.924 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.183 nvme0n1 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.183 20:35:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.183 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.442 nvme0n1 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:18:45.442 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:18:45.443 20:35:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.443 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.701 nvme0n1 00:18:45.701 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.701 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.701 20:35:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.701 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.701 20:35:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:45.701 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.959 nvme0n1 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:45.959 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.960 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.524 nvme0n1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.524 20:35:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 nvme0n1 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.090 20:35:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.090 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.348 nvme0n1 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.348 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.606 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.607 20:35:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.865 nvme0n1 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:47.865 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
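The host/auth.sh@42-51 entries traced above are the target-side half of each iteration: nvmet_auth_set_key pushes the chosen digest, DH group and DHHC-1 secrets into the kernel nvmet host entry before the SPDK initiator tries to authenticate. The sketch below is a minimal reconstruction of that step from the echoes visible in this trace; it assumes the stock Linux nvmet configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) and reuses the host NQN seen in the attach_controller calls, so treat the exact paths as illustrative rather than copied from the script.

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Host NQN taken from the attach_controller calls in this log; the configfs
    # layout is the standard Linux nvmet one and may differ on the test rig.
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # e.g. 'hmac(sha384)'  (@48)
    echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe6144       (@49)
    echo "$key"          > "$host_dir/dhchap_key"      # DHHC-1:xx:... secret (@50)
    if [[ -n $ckey ]]; then                            # bidirectional auth only (@51)
        echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    fi
}

Note that keyid 4 has no controller key in this run (ckey=''), which is why the [[ -z '' ]] check at @51 skips the controller-key write for that iteration.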
00:18:47.866 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:47.866 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:47.866 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:47.866 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.866 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.431 nvme0n1 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
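On the initiator side, the host/auth.sh@55-65 entries repeat the same connect_authenticate pattern for every digest/dhgroup/keyid combination: restrict bdev_nvme to the single DH-HMAC-CHAP parameter pair under test, attach an authenticated controller, check that it actually shows up, then detach it. The condensed sketch below uses only rpc_cmd invocations that appear verbatim in this log; the key0..key4/ckey0..ckey4 names are assumed to have been registered with SPDK earlier in the test (outside this excerpt), and 10.0.0.1:4420 is the initiator address the trace resolved via get_main_ns_ip.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ctrlr_key=()
    # Only pass --dhchap-ctrlr-key when a controller key exists for this keyid.
    [[ -n ${ckeys[keyid]} ]] && ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

    # Authentication succeeded only if the controller actually materialized.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

Each successful pass is what produces the bare "nvme0n1" namespace lines interleaved in the trace before the loop moves on to the next keyid or dhgroup.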
00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.431 20:35:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.996 nvme0n1 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:48.996 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.997 20:35:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.931 nvme0n1 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.932 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.499 nvme0n1 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:50.499 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.500 20:35:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 nvme0n1 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.067 20:35:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.067 20:35:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.326 20:35:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:51.326 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.326 20:35:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.893 nvme0n1 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 nvme0n1 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.894 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.152 20:35:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.152 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.153 nvme0n1 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.153 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.410 nvme0n1 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.410 20:35:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.410 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.411 20:35:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.411 nvme0n1 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.411 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.687 20:35:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.687 nvme0n1 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.688 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.947 nvme0n1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.947 
20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.947 20:35:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.947 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 nvme0n1 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 nvme0n1 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.206 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 20:35:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
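Stripped of the xtrace noise, every connect_authenticate pass in this log reduces to the same four host-side RPCs. Condensed here from the surrounding entries for the sha512/ffdhe3072/keyid=3 pass (rpc_cmd is the test suite's wrapper around the SPDK RPC socket; key3 and ckey3 name secrets the script registered earlier, outside this excerpt):

# One connect_authenticate pass, as driven by host/auth.sh@60-65 above.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expects nvme0 when DH-HMAC-CHAP succeeds
rpc_cmd bdev_nvme_detach_controller nvme0              # tear down before the next key/dhgroup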
00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 nvme0n1 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.466 
20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.466 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.725 20:35:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.725 nvme0n1 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.725 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 nvme0n1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:53.984 20:35:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.984 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.242 nvme0n1 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
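By this point the same set-key/connect cycle has run for every key ID under ffdhe2048 and ffdhe3072 and is part-way through ffdhe4096. Reassembled from the @100-@104 loop markers in the trace (the digests, dhgroups and keys arrays are populated earlier in the script, outside this excerpt), the driver has roughly this shape:

# Sweep reconstructed from the for-loop markers above; only sha512 and the ffdhe*
# groups appear in this portion of the log, and the array contents are assumed.
for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
                for keyid in "${!keys[@]}"; do
                        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the kernel target (see sketch above)
                        connect_authenticate "$digest" "$dhgroup" "$keyid"  # RPC attach, verify name, detach
                done
        done
done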
00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.242 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.501 nvme0n1 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.501 20:35:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 nvme0n1 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.759 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.017 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.018 nvme0n1 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:55.018 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
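
The get_main_ns_ip trace interleaved here (nvmf/common.sh@741-755) is only resolving which address the initiator should dial: it maps the transport to the name of an environment variable and echoes that variable's value, 10.0.0.1 in this run. A rough reconstruction of the helper follows; the associative-array entries and the checks mirror the trace, while the transport variable name and the indirect expansion are inferred rather than read from the log:

  # Reconstruction from the nvmf/common.sh@741-755 xtrace; the trace only shows the
  # success path, so the failure branches below are assumed.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]="NVMF_FIRST_TARGET_IP"   # RDMA runs would use the first target IP
          ["tcp"]="NVMF_INITIATOR_IP"       # this TCP run resolves to 10.0.0.1
      )
      [[ -z ${TEST_TRANSPORT} ]] && return 1                    # traced as: [[ -z tcp ]]
      [[ -z ${ip_candidates[${TEST_TRANSPORT}]} ]] && return 1  # traced as: [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[${TEST_TRANSPORT}]}                    # traced as: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                               # traced as: [[ -z 10.0.0.1 ]]
      echo "${!ip}"
  }
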
00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.276 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.534 nvme0n1 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
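
Every connect_authenticate pass below repeats the same host-side sequence, which is easier to read with the xtrace noise stripped away. A condensed sketch for the sha512/ffdhe6144, keyid=1 iteration that starts here: all flags and values are taken verbatim from the trace, rpc_cmd is the suite's RPC wrapper (it forwards these calls to SPDK's JSON-RPC interface), and key1/ckey1 refer to key entries registered earlier in the run, outside this fragment.

  # Condensed from the host/auth.sh@57-65 trace for sha512/ffdhe6144, keyid=1.
  digest="sha512" dhgroup="ffdhe6144" keyid=1
  # 1. Restrict the initiator to the digest/DH-group pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
  # 2. Attach to the authenticated subsystem with the matching host and controller keys.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
  # 3. Confirm the controller came up, then detach so the next digest/dhgroup/key can be tried.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
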
00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:55.534 20:35:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.535 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.535 20:35:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.101 nvme0n1 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.101 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.359 nvme0n1 00:18:56.359 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.360 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.618 20:35:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.877 nvme0n1 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.877 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.444 nvme0n1 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.444 20:35:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWEwZjhhYTA3NjYwYTY5NzViMDgxYjgyNjc2YzEwNziCTEZY: 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmJhNzQzZGViN2IyNjg1ZTgxYzNiZWU0NGM4MTJiZGVlZGE3NzE4YTYxYThiYjg1Zjc5ZDljMmIzMmZmYmNkMG6wMaw=: 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.444 20:35:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.010 nvme0n1 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:18:58.010 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.011 20:35:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.945 nvme0n1 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.945 20:35:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmU2MTc3MDA5NjIyNDY0ZmMyMzdhMzQyNjE5NjQ3NWVwbWgs: 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGQ1ZjYwZGQxYzc4ZjNkNTRlZThhZWNmMjRiMTM0NzbIiPlS: 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.945 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.512 nvme0n1 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWJiMTc0ZjA3NzM2NGVjNGM5NjY1OWI2MTVmODRiNmNjMzMxYWQxOTVjODUwN2Jmp1C5VQ==: 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTk0ODJmMmE0ZDVhMWRkNTE4NGRmYmU2MjFmYjIxNjVhUxyk: 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:18:59.512 20:35:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.512 20:35:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.077 nvme0n1 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNhOGMzM2VhOTEyY2U4OTgzNzNmZTU3NmRhMzAwZGQzY2Q2NjQ3OGZjNDdhMzg1YTJhYmFmNzMxNmU1MDdiZI+Ncvc=: 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.077 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:00.336 20:35:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.902 nvme0n1 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDQwNWNjOWQzMDEzNGZlNDNiZDYxNDMyMjUwYTQ4YmQ5MDhkZmY5OTk3M2ZhMzYy8G8HIA==: 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: ]] 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTVhMTVjNTA4Mjc5ZTQzMDQwZTZkNDM5ODhiZTRhMWU1OTI5M2FjNDE3ZGI5MjdjdFcGTA==: 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.902 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.159 
20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.159 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.160 2024/07/15 20:35:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:01.160 request: 00:19:01.160 { 00:19:01.160 "method": "bdev_nvme_attach_controller", 00:19:01.160 "params": { 00:19:01.160 "name": "nvme0", 00:19:01.160 "trtype": "tcp", 00:19:01.160 "traddr": "10.0.0.1", 00:19:01.160 "adrfam": "ipv4", 00:19:01.160 "trsvcid": "4420", 00:19:01.160 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:01.160 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:01.160 "prchk_reftag": false, 00:19:01.160 "prchk_guard": false, 00:19:01.160 "hdgst": false, 00:19:01.160 "ddgst": false 00:19:01.160 } 00:19:01.160 } 00:19:01.160 Got JSON-RPC error response 00:19:01.160 GoRPCClient: error on JSON-RPC call 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- 
# rpc_cmd bdev_nvme_get_controllers 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.160 2024/07/15 20:35:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:01.160 request: 00:19:01.160 { 00:19:01.160 "method": "bdev_nvme_attach_controller", 00:19:01.160 "params": { 00:19:01.160 "name": 
"nvme0", 00:19:01.160 "trtype": "tcp", 00:19:01.160 "traddr": "10.0.0.1", 00:19:01.160 "adrfam": "ipv4", 00:19:01.160 "trsvcid": "4420", 00:19:01.160 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:01.160 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:01.160 "prchk_reftag": false, 00:19:01.160 "prchk_guard": false, 00:19:01.160 "hdgst": false, 00:19:01.160 "ddgst": false, 00:19:01.160 "dhchap_key": "key2" 00:19:01.160 } 00:19:01.160 } 00:19:01.160 Got JSON-RPC error response 00:19:01.160 GoRPCClient: error on JSON-RPC call 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:01.160 2024/07/15 20:35:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:01.160 request: 00:19:01.160 { 00:19:01.160 "method": "bdev_nvme_attach_controller", 00:19:01.160 "params": { 00:19:01.160 "name": "nvme0", 00:19:01.160 "trtype": "tcp", 00:19:01.160 "traddr": "10.0.0.1", 00:19:01.160 "adrfam": "ipv4", 00:19:01.160 "trsvcid": "4420", 00:19:01.160 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:01.160 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:01.160 "prchk_reftag": false, 00:19:01.160 "prchk_guard": false, 00:19:01.160 "hdgst": false, 00:19:01.160 "ddgst": false, 00:19:01.160 "dhchap_key": "key1", 00:19:01.160 "dhchap_ctrlr_key": "ckey2" 00:19:01.160 } 00:19:01.160 } 00:19:01.160 Got JSON-RPC error response 00:19:01.160 GoRPCClient: error on JSON-RPC call 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:01.160 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:19:01.161 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:01.161 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:01.161 rmmod nvme_tcp 00:19:01.418 rmmod nvme_fabrics 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 91386 ']' 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 91386 00:19:01.418 20:35:22 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 91386 ']' 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 91386 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 91386 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:01.418 killing process with pid 91386 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 91386' 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 91386 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 91386 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:01.418 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:01.677 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:01.677 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:19:01.677 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:19:01.677 20:35:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:02.240 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:02.240 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:02.240 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:02.498 20:35:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SxN /tmp/spdk.key-null.yag /tmp/spdk.key-sha256.Fec /tmp/spdk.key-sha384.ttP /tmp/spdk.key-sha512.xXp /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:02.498 20:35:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:02.755 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:02.755 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:02.755 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:02.756 ************************************ 00:19:02.756 END TEST nvmf_auth_host 00:19:02.756 ************************************ 00:19:02.756 00:19:02.756 real 0m37.024s 00:19:02.756 user 0m32.840s 00:19:02.756 sys 0m3.455s 00:19:02.756 20:35:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:02.756 20:35:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.756 20:35:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:02.756 20:35:24 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:19:02.756 20:35:24 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:02.756 20:35:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:02.756 20:35:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.756 20:35:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:02.756 ************************************ 00:19:02.756 START TEST nvmf_digest 00:19:02.756 ************************************ 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:02.756 * Looking for test storage... 
00:19:02.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 
00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:02.756 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:03.015 Cannot find device "nvmf_tgt_br" 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.015 Cannot find device "nvmf_tgt_br2" 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:03.015 Cannot find device "nvmf_tgt_br" 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:03.015 Cannot find device "nvmf_tgt_br2" 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:03.015 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:03.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:19:03.273 00:19:03.273 --- 10.0.0.2 ping statistics --- 00:19:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.273 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:03.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:03.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:03.273 00:19:03.273 --- 10.0.0.3 ping statistics --- 00:19:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.273 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:03.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:03.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:03.273 00:19:03.273 --- 10.0.0.1 ping statistics --- 00:19:03.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.273 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.273 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 ************************************ 00:19:03.274 START TEST nvmf_digest_clean 00:19:03.274 ************************************ 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92995 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92995 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 92995 ']' 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.274 20:35:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:03.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:03.274 20:35:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:03.274 [2024-07-15 20:35:24.640469] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:03.274 [2024-07-15 20:35:24.640568] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.532 [2024-07-15 20:35:24.776400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.532 [2024-07-15 20:35:24.837513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.532 [2024-07-15 20:35:24.837570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.532 [2024-07-15 20:35:24.837581] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.532 [2024-07-15 20:35:24.837589] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.532 [2024-07-15 20:35:24.837597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:03.532 [2024-07-15 20:35:24.837622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:04.466 null0 00:19:04.466 [2024-07-15 20:35:25.773584] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.466 [2024-07-15 20:35:25.797714] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:04.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
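At this point the digest target is up: nvmf_tgt (pid 92995) was started inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, common_target_config created the null0 bdev, and the TCP transport is listening on 10.0.0.2 port 4420; the bdevperf client for the first clean-digest pass is being launched below. The batched RPCs behind common_target_config are not expanded in this trace, but they amount to roughly the following hedged sketch; only the TCP transport, the null0 bdev, the SPDKISFASTANDAWESOME serial and the 10.0.0.2:4420 listener are confirmed above, the bdev size and exact flags are assumptions:

    # Approximate equivalent of the batched target configuration (illustrative only)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1000 512                      # size/block size assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420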
00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93045 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93045 /var/tmp/bperf.sock 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93045 ']' 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.466 20:35:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:04.466 [2024-07-15 20:35:25.856323] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:04.466 [2024-07-15 20:35:25.856608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93045 ] 00:19:04.724 [2024-07-15 20:35:25.991627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.724 [2024-07-15 20:35:26.050432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.724 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.724 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:04.724 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:04.724 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:04.724 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:04.982 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:04.982 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:05.240 nvme0n1 00:19:05.498 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:05.498 20:35:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:19:05.498 Running I/O for 2 seconds... 00:19:07.396 00:19:07.396 Latency(us) 00:19:07.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:07.396 nvme0n1 : 2.00 17597.16 68.74 0.00 0.00 7265.51 3425.75 18111.77 00:19:07.396 =================================================================================================================== 00:19:07.396 Total : 17597.16 68.74 0.00 0.00 7265.51 3425.75 18111.77 00:19:07.396 0 00:19:07.396 20:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:07.396 20:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:07.396 20:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:07.396 | select(.opcode=="crc32c") 00:19:07.396 | "\(.module_name) \(.executed)"' 00:19:07.396 20:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:07.396 20:35:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93045 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93045 ']' 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93045 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93045 00:19:07.959 killing process with pid 93045 00:19:07.959 Received shutdown signal, test time was about 2.000000 seconds 00:19:07.959 00:19:07.959 Latency(us) 00:19:07.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.959 =================================================================================================================== 00:19:07.959 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93045' 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93045 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93045 00:19:07.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
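That completes the first clean-digest pass: bdevperf attached to the target with --ddgst, so every 4 KiB random read carried an NVMe/TCP data digest, and the test then confirmed through the accel framework that the CRC-32C work was executed by the expected module ("software" here, since scan_dsa is false). Condensed from the accel_get_stats/jq exchange above, the verification step is essentially the following sketch, using the same socket and jq filter as the trace; the second pass, with 128 KiB blocks at queue depth 16, starts below:

    # Pull accel stats from the bdevperf app and check which module executed the crc32c ops.
    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
          jq -r '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    (( acc_executed > 0 )) && [[ $acc_module == software ]] || exit 1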
00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93122 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93122 /var/tmp/bperf.sock 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93122 ']' 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.959 20:35:29 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:08.217 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:08.217 Zero copy mechanism will not be used. 00:19:08.217 [2024-07-15 20:35:29.479393] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:19:08.217 [2024-07-15 20:35:29.479486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93122 ] 00:19:08.217 [2024-07-15 20:35:29.615367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.217 [2024-07-15 20:35:29.674499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.170 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:09.170 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:09.170 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:09.170 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:09.170 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:09.428 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.428 20:35:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:09.993 nvme0n1 00:19:09.993 20:35:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:09.993 20:35:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:09.993 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:09.993 Zero copy mechanism will not be used. 00:19:09.993 Running I/O for 2 seconds... 
00:19:11.891 00:19:11.891 Latency(us) 00:19:11.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.891 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:11.891 nvme0n1 : 2.00 6913.12 864.14 0.00 0.00 2310.20 647.91 8460.10 00:19:11.891 =================================================================================================================== 00:19:11.891 Total : 6913.12 864.14 0.00 0.00 2310.20 647.91 8460.10 00:19:11.891 0 00:19:11.891 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:11.891 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:11.891 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:11.891 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:11.891 | select(.opcode=="crc32c") 00:19:11.891 | "\(.module_name) \(.executed)"' 00:19:11.891 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93122 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93122 ']' 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93122 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:12.455 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93122 00:19:12.455 killing process with pid 93122 00:19:12.456 Received shutdown signal, test time was about 2.000000 seconds 00:19:12.456 00:19:12.456 Latency(us) 00:19:12.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.456 =================================================================================================================== 00:19:12.456 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93122' 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93122 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93122 00:19:12.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
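That closes the second pass (random reads of 131072 bytes at queue depth 16, which also exercises the path above bdevperf's 65536-byte zero-copy threshold); the third and final clean pass, 4 KiB random writes at queue depth 128, begins below. Each pass follows the same client-side sequence, condensed here from the trace as a reproduction sketch with paths shortened and the values of the 128 KiB run:

    # One digest pass by hand: start bdevperf paused, enable data digest on attach, run 2 s of I/O.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    # (the test waits for /var/tmp/bperf.sock to appear before issuing RPCs)
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests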
00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93212 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93212 /var/tmp/bperf.sock 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93212 ']' 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.456 20:35:33 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:12.456 [2024-07-15 20:35:33.908973] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:19:12.456 [2024-07-15 20:35:33.909074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93212 ] 00:19:12.714 [2024-07-15 20:35:34.042556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.714 [2024-07-15 20:35:34.103403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.647 20:35:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.647 20:35:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:13.647 20:35:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:13.647 20:35:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:13.647 20:35:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:13.904 20:35:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:13.904 20:35:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:14.162 nvme0n1 00:19:14.162 20:35:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:14.162 20:35:35 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:14.420 Running I/O for 2 seconds... 
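The randwrite 4096/128 pass above goes through the same bring-up as the previous run; a condensed sketch of that sequence, assembled from the RPCs in the trace (timeouts and error handling omitted):
# Condensed bring-up for one bdevperf pass, reconstructed from the traced RPCs.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock
# 1. bdevperf was started with --wait-for-rpc, so the framework is released first.
"$rpc" -s "$sock" framework_start_init
# 2. Attach the NVMe-oF target with data digest enabled (--ddgst) so every I/O
#    carries a CRC32C data digest over TCP.
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 3. Kick off the timed workload configured on the bdevperf command line (-w/-o/-q/-t).
"$bperf_py" -s "$sock" perform_tests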
00:19:16.321 00:19:16.321 Latency(us) 00:19:16.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.321 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:16.321 nvme0n1 : 2.01 20685.35 80.80 0.00 0.00 6180.88 2502.28 14120.03 00:19:16.321 =================================================================================================================== 00:19:16.321 Total : 20685.35 80.80 0.00 0.00 6180.88 2502.28 14120.03 00:19:16.321 0 00:19:16.321 20:35:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:16.321 20:35:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:16.321 20:35:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:16.321 20:35:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:16.321 20:35:37 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:16.321 | select(.opcode=="crc32c") 00:19:16.321 | "\(.module_name) \(.executed)"' 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93212 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93212 ']' 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93212 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:16.579 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93212 00:19:16.837 killing process with pid 93212 00:19:16.837 Received shutdown signal, test time was about 2.000000 seconds 00:19:16.837 00:19:16.837 Latency(us) 00:19:16.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.837 =================================================================================================================== 00:19:16.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93212' 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93212 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93212 00:19:16.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
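Teardown of each bdevperf instance goes through the killprocess helper seen in the trace; a minimal sketch of its checks, reconstructed from the xtrace lines above (the function body is abbreviated and partly paraphrased):
# Abbreviated sketch of the killprocess teardown traced above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # the '[' -z "$pid" ']' guard in the trace
    kill -0 "$pid" || return 1           # process must still exist
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 for bdevperf here
        # The trace compares $process_name against "sudo"; that branch is not taken for
        # reactor_1, so the plain kill/wait path below is the one exercised in this log.
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}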
00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93298 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93298 /var/tmp/bperf.sock 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 93298 ']' 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.837 20:35:38 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:16.837 [2024-07-15 20:35:38.324421] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:16.837 [2024-07-15 20:35:38.324798] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93298 ] 00:19:16.837 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:16.837 Zero copy mechanism will not be used. 
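Each digest_clean pass is parameterized the same way; the run above maps its positional arguments onto bdevperf flags roughly as follows (a sketch based on the locals and command line in the trace, with the function body abbreviated):
# run_bperf <rw> <bs> <qd> <scan_dsa> -- abbreviated sketch matching the traced locals.
run_bperf() {
    local rw=$1 bs=$2 qd=$3 scan_dsa=$4          # e.g. randwrite 131072 16 false
    local bperfpid
    # -z keeps bdevperf alive after init; --wait-for-rpc defers framework start until
    # the digest options have been pushed over /var/tmp/bperf.sock.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z --wait-for-rpc &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
    # ... framework_start_init, bdev_nvme_attach_controller --ddgst, perform_tests,
    # the accel_get_stats check and killprocess follow, as traced above.
}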
00:19:17.096 [2024-07-15 20:35:38.464421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.096 [2024-07-15 20:35:38.526357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.031 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.031 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:19:18.031 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:18.031 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:18.031 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:18.289 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:18.289 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:18.546 nvme0n1 00:19:18.546 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:18.546 20:35:39 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:18.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:18.804 Zero copy mechanism will not be used. 00:19:18.804 Running I/O for 2 seconds... 00:19:20.703 00:19:20.703 Latency(us) 00:19:20.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.703 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:20.703 nvme0n1 : 2.00 6654.29 831.79 0.00 0.00 2398.61 1817.13 11141.12 00:19:20.703 =================================================================================================================== 00:19:20.703 Total : 6654.29 831.79 0.00 0.00 2398.61 1817.13 11141.12 00:19:20.703 0 00:19:20.703 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:20.703 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:20.703 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:20.703 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:20.703 | select(.opcode=="crc32c") 00:19:20.703 | "\(.module_name) \(.executed)"' 00:19:20.703 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93298 00:19:20.962 20:35:42 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 93298 ']' 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 93298 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93298 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93298' 00:19:20.962 killing process with pid 93298 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 93298 00:19:20.962 Received shutdown signal, test time was about 2.000000 seconds 00:19:20.962 00:19:20.962 Latency(us) 00:19:20.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.962 =================================================================================================================== 00:19:20.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.962 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 93298 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92995 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 92995 ']' 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 92995 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 92995 00:19:21.220 killing process with pid 92995 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 92995' 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 92995 00:19:21.220 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 92995 00:19:21.479 00:19:21.479 real 0m18.198s 00:19:21.479 user 0m35.418s 00:19:21.479 sys 0m4.329s 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:21.479 ************************************ 00:19:21.479 END TEST nvmf_digest_clean 00:19:21.479 ************************************ 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # 
run_test nvmf_digest_error run_digest_error 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:21.479 ************************************ 00:19:21.479 START TEST nvmf_digest_error 00:19:21.479 ************************************ 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=93417 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 93417 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93417 ']' 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.479 20:35:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.479 [2024-07-15 20:35:42.893555] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:21.479 [2024-07-15 20:35:42.893660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.737 [2024-07-15 20:35:43.031261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.737 [2024-07-15 20:35:43.089913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.737 [2024-07-15 20:35:43.089969] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.737 [2024-07-15 20:35:43.089980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.737 [2024-07-15 20:35:43.089989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.737 [2024-07-15 20:35:43.089996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
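The nvmf_digest_error test starting here drives the same traffic, but with crc32c deliberately routed through the error accel module on the target. A sketch of that target-side re-wiring, using the RPCs traced just below (rpc_cmd in the trace is a thin wrapper around rpc.py; nvmftestinit/common_target_config internals are elided):
# Target-side accel re-wiring for the digest error test, reconstructed from the trace.
# nvmf_tgt (pid 93417) was started with --wait-for-rpc, so this happens before framework init.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Route every crc32c operation through the "error" accel module so data digests can be
# corrupted on demand later via accel_error_inject_error.
"$rpc" accel_assign_opc -o crc32c -m error
# The usual target config then follows: a null0 bdev, TCP transport init and a listener
# on 10.0.0.2:4420 (the *** TCP Transport Init *** / Listening notices below).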
00:19:21.737 [2024-07-15 20:35:43.090022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.737 [2024-07-15 20:35:43.162409] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.737 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.737 null0 00:19:21.737 [2024-07-15 20:35:43.235383] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.995 [2024-07-15 20:35:43.259491] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93442 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93442 /var/tmp/bperf.sock 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93442 ']' 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.995 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.995 20:35:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:21.995 [2024-07-15 20:35:43.319797] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:21.995 [2024-07-15 20:35:43.319905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93442 ] 00:19:21.995 [2024-07-15 20:35:43.459789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.252 [2024-07-15 20:35:43.543127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.878 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.878 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:22.878 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:22.878 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:23.444 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:23.444 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.444 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:23.444 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.444 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:23.444 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:23.702 nvme0n1 00:19:23.702 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:19:23.702 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.702 20:35:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:23.702 20:35:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.702 20:35:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:23.702 20:35:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:23.702 Running I/O for 2 seconds... 
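The flood of data digest errors that follows is induced on purpose: before perform_tests, the host-side bdevperf is told to count errors and retry forever, and the target's crc32c error module is switched from disable to corrupt. A sketch of that sequence, using the RPCs visible in the trace above:
# Host and target knobs behind the digest-error run, reconstructed from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
# Host side (bdevperf over /var/tmp/bperf.sock): keep NVMe error statistics and retry
# forever, so corrupted digests surface as transient transport errors, not failed jobs.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Target side: make sure crc32c error injection starts out disabled.
"$rpc" accel_error_inject_error -o crc32c -t disable
# Host side: attach the controller with TCP data digest (--ddgst) enabled.
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Target side: switch crc32c injection to "corrupt", with the -i 256 argument as traced.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
# perform_tests then runs randread 4096/128 for 2 seconds; each corrupted digest shows up
# below as "data digest error ... COMMAND TRANSIENT TRANSPORT ERROR (00/22)".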
00:19:23.702 [2024-07-15 20:35:45.179590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.702 [2024-07-15 20:35:45.179674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.702 [2024-07-15 20:35:45.179701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.702 [2024-07-15 20:35:45.195764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.702 [2024-07-15 20:35:45.195850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.702 [2024-07-15 20:35:45.195897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.960 [2024-07-15 20:35:45.211302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.960 [2024-07-15 20:35:45.211384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.211411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.230537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.230602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.230616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.243504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.243554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.243568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.257808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.257881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.257898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.272597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.272653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.272668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.287942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.287999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.288021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.300174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.300228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.300243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.314087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.314138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.314153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.328617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.328670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.328695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.342786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.342849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.342865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.357972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.358043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.358061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.373661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.373721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.373736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.387734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.387788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.387803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.400499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.400560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.400576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.413669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.413726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.413741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.431188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.431268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.431296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.443449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.443512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.443528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:23.961 [2024-07-15 20:35:45.458495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:23.961 [2024-07-15 20:35:45.458568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:23.961 [2024-07-15 20:35:45.458584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.219 [2024-07-15 20:35:45.473272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.219 [2024-07-15 20:35:45.473339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.473355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.487751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.487805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.487820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.502679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.502753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.502769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.515074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.515146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.515162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.530155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.530223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.530239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.544757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.544827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.544842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.560123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.560166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.560181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.573864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.573917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:24.220 [2024-07-15 20:35:45.573932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.585841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.585901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.585915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.601351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.601394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.601409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.615596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.615640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.615654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.630616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.630659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.630674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.645243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.645285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.645300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.660126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.660192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.660209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.675578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.675631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:3029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.675646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.690192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.690252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.690268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.220 [2024-07-15 20:35:45.705619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.220 [2024-07-15 20:35:45.705690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.220 [2024-07-15 20:35:45.705707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.478 [2024-07-15 20:35:45.720768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.478 [2024-07-15 20:35:45.720827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.478 [2024-07-15 20:35:45.720842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.478 [2024-07-15 20:35:45.736434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.478 [2024-07-15 20:35:45.736502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.478 [2024-07-15 20:35:45.736523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.478 [2024-07-15 20:35:45.751591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.478 [2024-07-15 20:35:45.751664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.478 [2024-07-15 20:35:45.751684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.764393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.764453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.764469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.780225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.780277] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.780293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.793903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.793953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.793969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.808341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.808393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.808408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.830439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.830499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.830516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.844714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.844761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.844777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.857759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.857814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.857834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.870811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.870857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.870895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.886259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 
[2024-07-15 20:35:45.886316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.886333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.901151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.901205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.901234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.917648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.917698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.917719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.931542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.931592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.931613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.944463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.944516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.944537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.958997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.959055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.959082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.479 [2024-07-15 20:35:45.974458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.479 [2024-07-15 20:35:45.974519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.479 [2024-07-15 20:35:45.974545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.737 [2024-07-15 20:35:45.996423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x22713e0) 00:19:24.737 [2024-07-15 20:35:45.996499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.737 [2024-07-15 20:35:45.996525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.737 [2024-07-15 20:35:46.015985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.737 [2024-07-15 20:35:46.016055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.737 [2024-07-15 20:35:46.016074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.737 [2024-07-15 20:35:46.031514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.737 [2024-07-15 20:35:46.031580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.737 [2024-07-15 20:35:46.031596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.737 [2024-07-15 20:35:46.047249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.737 [2024-07-15 20:35:46.047340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.737 [2024-07-15 20:35:46.047368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.737 [2024-07-15 20:35:46.064486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.737 [2024-07-15 20:35:46.064571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.737 [2024-07-15 20:35:46.064598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.737 [2024-07-15 20:35:46.082786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.737 [2024-07-15 20:35:46.082889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.737 [2024-07-15 20:35:46.082917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.098262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.098345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.098371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.113589] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.113647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.113662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.129636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.129696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.129712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.145126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.145184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.145200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.161198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.161251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.161267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.173786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.173839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.173856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.188367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.188425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.188441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.205041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.205094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.205110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.219147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.219202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.219218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.738 [2024-07-15 20:35:46.233907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.738 [2024-07-15 20:35:46.233968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.738 [2024-07-15 20:35:46.233983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.248213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.248287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.248308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.264621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.264673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:11263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.264700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.276938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.276986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.277001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.291950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.292000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.292015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.305972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.306025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.306041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.322198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.322259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.322274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.336657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.336743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.336769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.351669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.351731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.351747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.367185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.367243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.367259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.996 [2024-07-15 20:35:46.382385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.996 [2024-07-15 20:35:46.382442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.996 [2024-07-15 20:35:46.382458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.396142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.396212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.396235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.412235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.412323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.412341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.429254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.429328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.429348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.446138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.446233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.446256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.462744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.462846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.462891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.477061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.477144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.477167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:24.997 [2024-07-15 20:35:46.494712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:24.997 [2024-07-15 20:35:46.494770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:24.997 [2024-07-15 20:35:46.494786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.507917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.507966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.507982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.522668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.522727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.255 [2024-07-15 20:35:46.522756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.539230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.539286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.539313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.554740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.554802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.554829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.570203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.570257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.570285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.583788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.583838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.583853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.599038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.599090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.599106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.614678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.614732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.614747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.629165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.629220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:24076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.629240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.644581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.644635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.644651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.659351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.659401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.659417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.672371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.672415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.672429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.686851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.686910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.686924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.702161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.702205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.255 [2024-07-15 20:35:46.702220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.255 [2024-07-15 20:35:46.715770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.255 [2024-07-15 20:35:46.715815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.256 [2024-07-15 20:35:46.715830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.256 [2024-07-15 20:35:46.728659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.256 [2024-07-15 20:35:46.728713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.256 [2024-07-15 20:35:46.728729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.256 [2024-07-15 20:35:46.744077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.256 [2024-07-15 20:35:46.744121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.256 [2024-07-15 20:35:46.744136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.513 [2024-07-15 20:35:46.757997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.513 [2024-07-15 20:35:46.758040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.513 [2024-07-15 20:35:46.758055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.513 [2024-07-15 20:35:46.772285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.772328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.772343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.786523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.786565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.786580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.800604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.800650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.800666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.815923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.815968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.815989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.830429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 
00:19:25.514 [2024-07-15 20:35:46.830471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.830487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.842832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.842886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.842902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.858272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.858315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.858330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.873304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.873348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.873363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.887025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.887068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.887082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.900827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.900888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.900905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.916718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.916761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.916775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.931549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.931592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.931607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.943640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.943683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.943697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.959764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.959810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.959825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.975234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.975280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.975296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:46.988113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:46.988155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:46.988170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.514 [2024-07-15 20:35:47.001178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.514 [2024-07-15 20:35:47.001219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.514 [2024-07-15 20:35:47.001234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.015987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.016029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.016044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.028746] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.028789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.028804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.043499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.043542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.043557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.055826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.055883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.055899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.070236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.070279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.070293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.083972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.084018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.084038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.099637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.099683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.099699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.772 [2024-07-15 20:35:47.111814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0) 00:19:25.772 [2024-07-15 20:35:47.111859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.772 [2024-07-15 20:35:47.111896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:19:25.772 [2024-07-15 20:35:47.127532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0)
00:19:25.772 [2024-07-15 20:35:47.127577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:25.772 [2024-07-15 20:35:47.127592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:25.772 [2024-07-15 20:35:47.143253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0)
00:19:25.772 [2024-07-15 20:35:47.143296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:25.772 [2024-07-15 20:35:47.143311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:25.772 [2024-07-15 20:35:47.158335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22713e0)
00:19:25.772 [2024-07-15 20:35:47.158377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:25.772 [2024-07-15 20:35:47.158391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:25.772
00:19:25.772 Latency(us)
00:19:25.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:25.772 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:19:25.772 nvme0n1 : 2.01 17072.23 66.69 0.00 0.00 7488.28 3455.53 24784.52
00:19:25.773 ===================================================================================================================
00:19:25.773 Total : 17072.23 66.69 0.00 0.00 7488.28 3455.53 24784.52
00:19:25.773 0
00:19:25.773 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:25.773 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:25.773 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:25.773 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:25.773 | .driver_specific
00:19:25.773 | .nvme_error
00:19:25.773 | .status_code
00:19:25.773 | .command_transient_transport_error'
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 134 > 0 ))
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93442
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93442 ']'
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93442
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:26.030 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93442
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93442'
00:19:26.288 killing process with pid 93442
00:19:26.288 Received shutdown signal, test time was about 2.000000 seconds
00:19:26.288
00:19:26.288 Latency(us)
00:19:26.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:26.288 ===================================================================================================================
00:19:26.288 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93442
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93442
00:19:26.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93537
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93537 /var/tmp/bperf.sock
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93537 ']'
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:26.288 20:35:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:26.288 [2024-07-15 20:35:47.756483] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:19:26.288 [2024-07-15 20:35:47.756774] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93537 ]
00:19:26.288 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:26.288 Zero copy mechanism will not be used.
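The trace above tears down the first bdevperf instance (pid 93442) after get_transient_errcount reports a non-zero count (134), then starts a second instance in RPC-wait mode for the 131072-byte, qd=16 random-read pass. The bash sketch below is only an illustration of that flow under the paths and flags visible in this trace; the polling loop is a stand-in for the harness's waitforlisten helper, and the variable names are invented for the example.

#!/usr/bin/env bash
# Illustrative sketch only (not host/digest.sh): launch bdevperf idle (-z),
# wait for its RPC socket, then read the transient transport error counter
# the same way the trace above does.
rootdir=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock

"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
    -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Stand-in for the harness's waitforlisten helper: poll until the
# UNIX-domain RPC socket answers.
until "$rootdir/scripts/rpc.py" -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

# After a pass, pull per-bdev NVMe error statistics (enabled elsewhere via
# bdev_nvme_set_options --nvme-error-stat) and extract the counter that
# get_transient_errcount checks above.
errcount=$("$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 )) && echo "observed $errcount transient transport errors"

kill "$bperfpid"
wait "$bperfpid"

The jq filter is simply the single-line form of the multi-line filter printed in the trace above.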
00:19:26.547 [2024-07-15 20:35:47.894007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.547 [2024-07-15 20:35:47.954061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.480 20:35:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.480 20:35:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:19:27.480 20:35:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:27.480 20:35:48 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:19:27.738 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:19:27.738 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.738 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:27.738 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.738 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:27.738 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:27.995 nvme0n1 00:19:28.253 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:19:28.253 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.253 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:28.253 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.253 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:19:28.253 20:35:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:28.253 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:28.253 Zero copy mechanism will not be used. 00:19:28.253 Running I/O for 2 seconds... 
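Before perform_tests is issued above, the error path is armed entirely over JSON-RPC: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, any earlier crc32c injection is disabled, the controller is attached with the TCP data digest (--ddgst), and crc32c corruption is then re-armed so that subsequent reads fail digest verification. The sketch below condenses that sequence using only calls that appear in the trace; the target-side socket path is an assumption (rpc_cmd in the harness picks the real destination), while the bperf socket, listener address, NQN and injection arguments are taken from the trace.

#!/usr/bin/env bash
# Illustrative sketch of the RPC sequence traced above (not the test script
# itself).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock    # bdevperf, the NVMe/TCP host side
target_sock=/var/tmp/spdk.sock    # assumed destination of rpc_cmd

# Host side: keep per-bdev NVMe error statistics and retry failed I/O
# indefinitely, so corrupted reads are counted and retried rather than
# failing the job outright.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale crc32c error injection, then attach the controller with the
# TCP data digest enabled so received data PDUs are CRC32C-verified.
"$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption with the same arguments the trace uses; the mismatched
# digests then show up on the host as "data digest error" followed by
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions, as logged here.
"$rpc" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

# Finally start the queued workload in the already-running bdevperf instance.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

Because --bdev-retry-count is -1, each read that completes with a transient transport error is retried, which is consistent with Fail/s staying at 0.00 in the summary table of the previous pass above.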
00:19:28.253 [2024-07-15 20:35:49.668384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.253 [2024-07-15 20:35:49.668679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.253 [2024-07-15 20:35:49.668724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.253 [2024-07-15 20:35:49.673740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.253 [2024-07-15 20:35:49.673809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.253 [2024-07-15 20:35:49.673826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.253 [2024-07-15 20:35:49.679382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.253 [2024-07-15 20:35:49.679477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.253 [2024-07-15 20:35:49.679496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.253 [2024-07-15 20:35:49.684486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.253 [2024-07-15 20:35:49.684535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.253 [2024-07-15 20:35:49.684551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.253 [2024-07-15 20:35:49.690026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.253 [2024-07-15 20:35:49.690084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.253 [2024-07-15 20:35:49.690101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.253 [2024-07-15 20:35:49.693096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.693144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.693160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.697922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.697973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.697989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.702294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.702365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.702382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.707184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.707232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.707248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.712796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.712860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.712889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.716097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.716147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.716165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.721692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.721789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.721816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.727973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.728045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.728063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:28.254 [2024-07-15 20:35:49.731534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:28.254 [2024-07-15 20:35:49.731592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.254 [2024-07-15 20:35:49.731609] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:28.254 [2024-07-15 20:35:49.739165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380)
00:19:28.254 [2024-07-15 20:35:49.739267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:28.254 [2024-07-15 20:35:49.739293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:19:28.254 [2024-07-15 20:35:49.745341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380)
00:19:28.254 [2024-07-15 20:35:49.745434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:28.254 [2024-07-15 20:35:49.745460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:29.038 [2024-07-15 20:35:50.453513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380)
00:19:29.038 [2024-07-15 20:35:50.453597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:29.038 [2024-07-15 20:35:50.453614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.460484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.460584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.460604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.466012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.466057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.466072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.474065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.474137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.474155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.480348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.480430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.480447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.486662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.486737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.486754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.492364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.492437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.492454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.499043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.499115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.499132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.503049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.503100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.503115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.508399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.508485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.508507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.514262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.514332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.514349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.520644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.038 [2024-07-15 20:35:50.520748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.038 [2024-07-15 20:35:50.524544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.038 [2024-07-15 20:35:50.524596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.039 [2024-07-15 20:35:50.524613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.039 [2024-07-15 20:35:50.529130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.039 [2024-07-15 20:35:50.529193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.039 [2024-07-15 20:35:50.529211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.039 [2024-07-15 20:35:50.534545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.039 [2024-07-15 20:35:50.534642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.039 [2024-07-15 20:35:50.534661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.539591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.539686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.539704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.543699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.543772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.543789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.549343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.549419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.549436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.556066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.556174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.556203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.562126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.562203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.562221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.565638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.565709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.565725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.570161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.570235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.570259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.575815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.575887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.575905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.579597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.579652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.579667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.584771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.584853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.584900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.298 [2024-07-15 20:35:50.590762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.298 [2024-07-15 20:35:50.590839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.298 [2024-07-15 20:35:50.590858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.595559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.595656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.595682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.602457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.602554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.602580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.606141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.606186] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.606202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.612963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.613059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.613096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.617275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.617335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.617355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.622360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.622445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.622471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.630387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.630494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.630522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.636545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.636628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.636648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.642745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.642836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.642863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.647362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.647428] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.647451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.652703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.652782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.652810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.657854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.657949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.657969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.663497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.663556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.663573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.668138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.668245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.668263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.673379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.673453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.673469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.677986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.678104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.678126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.682840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.682926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.682951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.688498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.688593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.688616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.692530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.692639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.692656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.699045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.699145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.699164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.704563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.704666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.704715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.710650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.710751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.710779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.718958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.719046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.719070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.723500] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.723560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.723580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.730736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.730807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.730823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.738710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.738807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.738835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.746756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.746825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.746848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.753506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.753582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.753604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.761326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.299 [2024-07-15 20:35:50.761420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.299 [2024-07-15 20:35:50.761442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.299 [2024-07-15 20:35:50.766830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.300 [2024-07-15 20:35:50.766915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-07-15 20:35:50.766932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:29.300 [2024-07-15 20:35:50.771155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.300 [2024-07-15 20:35:50.771199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-07-15 20:35:50.771213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.300 [2024-07-15 20:35:50.777627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.300 [2024-07-15 20:35:50.777728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-07-15 20:35:50.777749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.300 [2024-07-15 20:35:50.784549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.300 [2024-07-15 20:35:50.784653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-07-15 20:35:50.784678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.300 [2024-07-15 20:35:50.789858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.300 [2024-07-15 20:35:50.789980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-07-15 20:35:50.790006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.300 [2024-07-15 20:35:50.795308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.300 [2024-07-15 20:35:50.795388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.300 [2024-07-15 20:35:50.795408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.801625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.801709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.801727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.807252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.807325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.807340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.811162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.811226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.811242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.816479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.816537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.816553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.820899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.820974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.820994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.826357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.826429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.826445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.831774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.831860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.831907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.836014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.836092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.836112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.841413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.841460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.841475] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.846534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.846585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.846600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.851507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.851579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.851595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.856568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.856642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.856664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.861038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.861102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.866593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.866689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.866712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.873388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.873485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.560 [2024-07-15 20:35:50.873513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.560 [2024-07-15 20:35:50.879161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.560 [2024-07-15 20:35:50.879237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.879252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.883753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.883824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.883839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.888578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.888650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.888665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.893303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.893390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.893406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.898435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.898505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.898522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.904539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.904613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.904628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.908088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.908129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.908144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.912223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.912289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:29.561 [2024-07-15 20:35:50.912303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.917729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.917801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.917816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.923035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.923104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.923120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.927466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.927539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.927555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.931401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.931458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.931472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.935933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.936001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.936016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.940548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.940632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.940651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.944766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.944828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.944842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.948819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.948891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.948908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.953467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.953533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.953548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.958228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.958290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.958306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.962416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.962479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.962495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.966521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.966583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.966606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.971829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.971907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.971922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.976600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.976707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.976735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.982161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.982251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.982276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.990051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.990149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.990175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:50.997695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:50.997794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:50.997821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:51.004998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:51.005101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:51.005128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:51.012622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:51.012738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:51.012766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:51.020634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:51.020759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:51.020785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:51.028378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 
00:19:29.561 [2024-07-15 20:35:51.028480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:51.028507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.561 [2024-07-15 20:35:51.035797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.561 [2024-07-15 20:35:51.035919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.561 [2024-07-15 20:35:51.035944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.562 [2024-07-15 20:35:51.043373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.562 [2024-07-15 20:35:51.043478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.562 [2024-07-15 20:35:51.043506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.562 [2024-07-15 20:35:51.051144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.562 [2024-07-15 20:35:51.051240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.562 [2024-07-15 20:35:51.051268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.562 [2024-07-15 20:35:51.056857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.562 [2024-07-15 20:35:51.056924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.562 [2024-07-15 20:35:51.056940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.834 [2024-07-15 20:35:51.062251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.834 [2024-07-15 20:35:51.062319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.834 [2024-07-15 20:35:51.062336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.834 [2024-07-15 20:35:51.065611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.834 [2024-07-15 20:35:51.065657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.834 [2024-07-15 20:35:51.065672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.834 [2024-07-15 20:35:51.071287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.834 [2024-07-15 20:35:51.071354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.834 [2024-07-15 20:35:51.071370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.834 [2024-07-15 20:35:51.076507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.834 [2024-07-15 20:35:51.076575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.834 [2024-07-15 20:35:51.076590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.834 [2024-07-15 20:35:51.081769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.834 [2024-07-15 20:35:51.081841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.834 [2024-07-15 20:35:51.081857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.834 [2024-07-15 20:35:51.086620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.086707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.090226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.090269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.090285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.095729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.095798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.095814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.101381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.101457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.101473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.106433] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.106512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.106529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.110978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.111054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.111073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.117262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.117346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.117367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.122929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.122997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.123012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.128498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.128552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.128568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.132289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.132373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.132397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.138132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.138221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.138246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.142473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.142551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.142576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.146821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.146902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.146919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.151746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.151831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.151851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.155796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.155856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.155886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.160050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.160112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.160128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.164754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.164822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.164837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.168535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.168590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.168605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.173143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.173207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.173233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.177109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.177165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.177179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.181189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.181248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.181264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.187194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.187262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.187278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.192727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.192796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.192812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.200030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.200117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.200133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.203885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.203942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.203958] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.209543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.209610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.209625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.215007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.215073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.215089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.220110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.220178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.220194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.223811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.223885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.835 [2024-07-15 20:35:51.223902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.835 [2024-07-15 20:35:51.228465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.835 [2024-07-15 20:35:51.228531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.228557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.233778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.233852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.233883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.237555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.237614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 
[2024-07-15 20:35:51.237629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.242241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.242308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.242325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.246785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.246853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.246890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.254437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.254530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.254550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.259259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.259335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.259359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.265584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.265683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.265712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.273083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.273173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.273198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.278507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.278594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.278621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.285138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.285234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.285263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.291583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.291678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.291697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.297408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.297504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.297523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.305251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.305345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.305368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.310307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.310359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.310377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.316402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.316498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.316523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.323116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.323216] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.323238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:29.836 [2024-07-15 20:35:51.330403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:29.836 [2024-07-15 20:35:51.330500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:29.836 [2024-07-15 20:35:51.330523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.336184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.336271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.336290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.339968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.340021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.340039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.346266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.346349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.346375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.353388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.353462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.353487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.358463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.358547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.358570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.361946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.361999] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.362015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.366498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.366577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.366595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.370938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.370997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.371013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.102 [2024-07-15 20:35:51.375579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.102 [2024-07-15 20:35:51.375621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.102 [2024-07-15 20:35:51.375636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.380835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.380908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.380924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.384980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.385021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.385037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.389389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.389446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.389462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.394691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.394754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.394769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.398322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.398366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.398381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.402923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.402981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.409083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.409153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.409169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.413151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.413214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.413237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.419777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.419883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.419911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.427337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.427425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.427450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.437299] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.437388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.437413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.447718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.447807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.447831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.457629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.457715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.457739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.467706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.467776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.467799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.477768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.477854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.477894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.488268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.488360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.488385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.498930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.499023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.499044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:19:30.103 [2024-07-15 20:35:51.509389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.509488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.509511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.519387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.519475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.519498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.530234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.530325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.530350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.540393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.540485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.540512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.550541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.550620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.550645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.560645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.560755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.560781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.571057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.571147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.571170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.581769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.581862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.581904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.103 [2024-07-15 20:35:51.592334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.103 [2024-07-15 20:35:51.592423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.103 [2024-07-15 20:35:51.592446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.362 [2024-07-15 20:35:51.602984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.362 [2024-07-15 20:35:51.603076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.362 [2024-07-15 20:35:51.603102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:30.362 [2024-07-15 20:35:51.613233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.362 [2024-07-15 20:35:51.613320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.362 [2024-07-15 20:35:51.613342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:30.362 [2024-07-15 20:35:51.623386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.362 [2024-07-15 20:35:51.623465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.362 [2024-07-15 20:35:51.623489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:30.362 [2024-07-15 20:35:51.633756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.362 [2024-07-15 20:35:51.633838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.362 [2024-07-15 20:35:51.633862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:30.362 [2024-07-15 20:35:51.643568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380) 00:19:30.362 [2024-07-15 20:35:51.643660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:30.362 [2024-07-15 20:35:51.643684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:19:30.362 [2024-07-15 20:35:51.653747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380)
00:19:30.362 [2024-07-15 20:35:51.653837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:30.362 [2024-07-15 20:35:51.653861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:30.362 [2024-07-15 20:35:51.663970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x246b380)
00:19:30.362 [2024-07-15 20:35:51.664044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:30.362 [2024-07-15 20:35:51.664068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:19:30.362
00:19:30.362 Latency(us)
00:19:30.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.362 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:19:30.362 nvme0n1 : 2.00 5483.93 685.49 0.00 0.00 2911.48 662.81 10902.81
00:19:30.362 ===================================================================================================================
00:19:30.362 Total : 5483.93 685.49 0.00 0.00 2911.48 662.81 10902.81
00:19:30.362 0
00:19:30.362 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:30.362 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:30.362 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:30.362 | .driver_specific
00:19:30.362 | .nvme_error
00:19:30.362 | .status_code
00:19:30.362 | .command_transient_transport_error'
00:19:30.362 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 354 > 0 ))
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93537
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93537 ']'
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93537
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:30.620 20:35:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93537
00:19:30.620 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:30.620 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:30.620 killing process with pid 93537
00:19:30.620 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93537'
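The get_transient_errcount trace above boils down to one RPC call piped through jq. A minimal stand-alone sketch, assuming the same bperf RPC socket and bdev name used in this run; the errcount variable is introduced here only for illustration:

    # Read the per-bdev NVMe error counters over the bperf RPC socket and pull out
    # the transient transport error count that host/digest.sh checks.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # errcount is an illustrative name; the run above reported 354 such errors,
    # and the test only requires the count to be non-zero.
    (( errcount > 0 ))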
20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93537
00:19:30.620 Received shutdown signal, test time was about 2.000000 seconds
00:19:30.620
00:19:30.620 Latency(us)
00:19:30.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:30.620 ===================================================================================================================
00:19:30.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:30.620 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93537
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93623
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93623 /var/tmp/bperf.sock
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93623 ']'
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:30.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:30.877 20:35:52 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:30.877 [2024-07-15 20:35:52.268057] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:19:30.877 [2024-07-15 20:35:52.268162] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93623 ]
00:19:31.134 [2024-07-15 20:35:52.417541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:31.134 [2024-07-15 20:35:52.502656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:32.068 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:32.068 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:19:32.068 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:32.068 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:32.326 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:32.326 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:32.326 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:32.326 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:32.326 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:32.326 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:32.585 nvme0n1
00:19:32.585 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:19:32.585 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:32.585 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:32.585 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:32.585 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:32.585 20:35:53 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:32.585 Running I/O for 2 seconds...
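Strung together, the xtrace lines above set up the randwrite digest-error pass. The following is a condensed sketch of that sequence, not the test script itself: the waitforlisten handshake is replaced by a plain sleep, and the accel_error_inject_error calls issued via rpc_cmd are assumed here to go to the nvmf target application's default RPC socket rather than the bperf socket.

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z, wait for RPC): randwrite, 4096-byte I/O, queue depth 128, 2 s run.
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randwrite -o 4096 -t 2 -q 128 -z &
    sleep 2   # stand-in for waitforlisten, which polls the socket instead

    # Enable NVMe error statistics and set the bdev retry count to -1 before attaching.
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c error injection off while the controller attaches with TCP data digest (--ddgst) enabled.
    # The default-socket rpc.py calls stand in for rpc_cmd and are an assumption of this sketch.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Turn crc32c corruption back on (-t corrupt -i 256) so the workload sees digest errors, then run it.
    $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests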
00:19:32.844 [2024-07-15 20:35:54.103725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ee5c8 00:19:32.844 [2024-07-15 20:35:54.104705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.104770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.118041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ec408 00:19:32.844 [2024-07-15 20:35:54.119674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.119725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.127084] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fef90 00:19:32.844 [2024-07-15 20:35:54.127906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.127957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.142111] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f4298 00:19:32.844 [2024-07-15 20:35:54.143415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.143466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.153689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fd640 00:19:32.844 [2024-07-15 20:35:54.154825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.154890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.168324] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e8088 00:19:32.844 [2024-07-15 20:35:54.170307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.170357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.177116] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f1430 00:19:32.844 [2024-07-15 20:35:54.178071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:14088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.178114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 
sqhd:001a p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.191745] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e1710 00:19:32.844 [2024-07-15 20:35:54.193300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.193353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.203257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190de8a8 00:19:32.844 [2024-07-15 20:35:54.204569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.204624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.214755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f6cc8 00:19:32.844 [2024-07-15 20:35:54.215937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.215985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.226353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fb048 00:19:32.844 [2024-07-15 20:35:54.227357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.227406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.237782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fcdd0 00:19:32.844 [2024-07-15 20:35:54.238625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.238668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.252961] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e9168 00:19:32.844 [2024-07-15 20:35:54.254781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.254824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.261739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e6b70 00:19:32.844 [2024-07-15 20:35:54.262729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.262773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.276322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ec408 00:19:32.844 [2024-07-15 20:35:54.278003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.278050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.287641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fcdd0 00:19:32.844 [2024-07-15 20:35:54.289172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.289217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.299495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fac10 00:19:32.844 [2024-07-15 20:35:54.300881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.844 [2024-07-15 20:35:54.300927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:32.844 [2024-07-15 20:35:54.310790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ef6a8 00:19:32.845 [2024-07-15 20:35:54.311978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.845 [2024-07-15 20:35:54.312024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:32.845 [2024-07-15 20:35:54.322594] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e49b0 00:19:32.845 [2024-07-15 20:35:54.323663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.845 [2024-07-15 20:35:54.323707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:32.845 [2024-07-15 20:35:54.337142] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e2c28 00:19:32.845 [2024-07-15 20:35:54.338883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:32.845 [2024-07-15 20:35:54.338934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.345784] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fdeb0 00:19:33.103 [2024-07-15 20:35:54.346553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.346597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.360325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f8a50 00:19:33.103 [2024-07-15 20:35:54.361782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.361829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.372552] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e7c50 00:19:33.103 [2024-07-15 20:35:54.373540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.373588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.384520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f4298 00:19:33.103 [2024-07-15 20:35:54.385819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.385863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.395925] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190de038 00:19:33.103 [2024-07-15 20:35:54.397080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.397123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.407310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f81e0 00:19:33.103 [2024-07-15 20:35:54.408286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.408331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.418649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e7818 00:19:33.103 [2024-07-15 20:35:54.419467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.103 [2024-07-15 20:35:54.419510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:33.103 [2024-07-15 20:35:54.430520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e73e0 00:19:33.104 [2024-07-15 20:35:54.431506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.431551] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.442702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e1f80 00:19:33.104 [2024-07-15 20:35:54.443666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.443713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.455050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f6890 00:19:33.104 [2024-07-15 20:35:54.455701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.455749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.467010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f20d8 00:19:33.104 [2024-07-15 20:35:54.467990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.468034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.478371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fbcf0 00:19:33.104 [2024-07-15 20:35:54.479203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.479248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.493524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e7818 00:19:33.104 [2024-07-15 20:35:54.495340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.495384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.502447] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190dfdc0 00:19:33.104 [2024-07-15 20:35:54.503435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.503478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.517069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190df550 00:19:33.104 [2024-07-15 20:35:54.518763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.518813] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.528448] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f5be8 00:19:33.104 [2024-07-15 20:35:54.529955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.530002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.540212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e5ec8 00:19:33.104 [2024-07-15 20:35:54.541434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.541478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.551945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e5ec8 00:19:33.104 [2024-07-15 20:35:54.552835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.552900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.563668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f0bc0 00:19:33.104 [2024-07-15 20:35:54.564425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.564477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.577467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f7100 00:19:33.104 [2024-07-15 20:35:54.579044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.579092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:33.104 [2024-07-15 20:35:54.588611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190eaab8 00:19:33.104 [2024-07-15 20:35:54.590147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.104 [2024-07-15 20:35:54.590196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.631651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e99d8 00:19:33.363 [2024-07-15 20:35:54.635878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 
20:35:54.635930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.680202] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f0350 00:19:33.363 [2024-07-15 20:35:54.684247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.684298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.716597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ebfd0 00:19:33.363 [2024-07-15 20:35:54.717610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.717657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.731455] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190de038 00:19:33.363 [2024-07-15 20:35:54.733135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.733182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.742712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f8618 00:19:33.363 [2024-07-15 20:35:54.744272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.744318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.754462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e6300 00:19:33.363 [2024-07-15 20:35:54.755661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.755703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.765816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ef6a8 00:19:33.363 [2024-07-15 20:35:54.766836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.766899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.777764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e3498 00:19:33.363 [2024-07-15 20:35:54.778950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:33.363 [2024-07-15 20:35:54.778997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.792207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f2948 00:19:33.363 [2024-07-15 20:35:54.794059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.794107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.800808] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190df118 00:19:33.363 [2024-07-15 20:35:54.801689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.801733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.815079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fa3a0 00:19:33.363 [2024-07-15 20:35:54.816299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.816346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.828456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fa3a0 00:19:33.363 [2024-07-15 20:35:54.830314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.830359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.837061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e3498 00:19:33.363 [2024-07-15 20:35:54.837950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.837995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:33.363 [2024-07-15 20:35:54.851446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190df550 00:19:33.363 [2024-07-15 20:35:54.853023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.363 [2024-07-15 20:35:54.853070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.862654] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f57b0 00:19:33.623 [2024-07-15 20:35:54.864051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18975 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.864098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.874331] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f4f40 00:19:33.623 [2024-07-15 20:35:54.875595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.875640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.886435] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f96f8 00:19:33.623 [2024-07-15 20:35:54.887224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.887270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.898851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f1430 00:19:33.623 [2024-07-15 20:35:54.899796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.899844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.910396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f5378 00:19:33.623 [2024-07-15 20:35:54.911804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.911851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.922108] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e9e10 00:19:33.623 [2024-07-15 20:35:54.923399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.923445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.934210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ed920 00:19:33.623 [2024-07-15 20:35:54.935501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.935544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.945617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ff3c8 00:19:33.623 [2024-07-15 20:35:54.946909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.946953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.957261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f3e60 00:19:33.623 [2024-07-15 20:35:54.958426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:21705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.958473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.969354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190dfdc0 00:19:33.623 [2024-07-15 20:35:54.970040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.970087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.983042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e7c50 00:19:33.623 [2024-07-15 20:35:54.984560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.984608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:54.994119] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ddc00 00:19:33.623 [2024-07-15 20:35:54.995772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:54.995819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.005970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f96f8 00:19:33.623 [2024-07-15 20:35:55.007336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.007379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.017186] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f31b8 00:19:33.623 [2024-07-15 20:35:55.018397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.018443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.028933] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f6020 00:19:33.623 [2024-07-15 20:35:55.030002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:2643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.030049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.041001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e0ea0 00:19:33.623 [2024-07-15 20:35:55.041590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.041635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.054642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f4298 00:19:33.623 [2024-07-15 20:35:55.056058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.056098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.065968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f9f68 00:19:33.623 [2024-07-15 20:35:55.067226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.067270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.077264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190efae0 00:19:33.623 [2024-07-15 20:35:55.078356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.078399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.088578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190de470 00:19:33.623 [2024-07-15 20:35:55.089551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.089594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.099866] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f5be8 00:19:33.623 [2024-07-15 20:35:55.100643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.100696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:33.623 [2024-07-15 20:35:55.114942] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f3e60 00:19:33.623 [2024-07-15 20:35:55.116727] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.623 [2024-07-15 20:35:55.116774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.123793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f8a50 00:19:33.882 [2024-07-15 20:35:55.124768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.124814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.138293] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ec408 00:19:33.882 [2024-07-15 20:35:55.139925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.139971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.147769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fdeb0 00:19:33.882 [2024-07-15 20:35:55.148757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.148804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.162242] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e99d8 00:19:33.882 [2024-07-15 20:35:55.163719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:23992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.163764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.173560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ebfd0 00:19:33.882 [2024-07-15 20:35:55.174864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.174925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.185523] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ddc00 00:19:33.882 [2024-07-15 20:35:55.186818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.186861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.196586] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f6020 00:19:33.882 [2024-07-15 
20:35:55.197896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.197937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.211038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ed4e8 00:19:33.882 [2024-07-15 20:35:55.213003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.213050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.219683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190de470 00:19:33.882 [2024-07-15 20:35:55.220676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.220729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.234099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fb480 00:19:33.882 [2024-07-15 20:35:55.235751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.235797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.245325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e2c28 00:19:33.882 [2024-07-15 20:35:55.246829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.246887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.257036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f7970 00:19:33.882 [2024-07-15 20:35:55.258397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.258444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.268289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ed4e8 00:19:33.882 [2024-07-15 20:35:55.269502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.269549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.279985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with 
pdu=0x2000190f7da8 00:19:33.882 [2024-07-15 20:35:55.281078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.281121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.292147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e38d0 00:19:33.882 [2024-07-15 20:35:55.292737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.292784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.302899] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190dece0 00:19:33.882 [2024-07-15 20:35:55.303634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.303678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.317354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190feb58 00:19:33.882 [2024-07-15 20:35:55.318762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.318805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.328541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e8d30 00:19:33.882 [2024-07-15 20:35:55.329823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:18577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.329882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.340264] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f0788 00:19:33.882 [2024-07-15 20:35:55.341412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.341456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.354781] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190eaab8 00:19:33.882 [2024-07-15 20:35:55.356596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.356642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.363464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b39880) with pdu=0x2000190f2510 00:19:33.882 [2024-07-15 20:35:55.364303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.364350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:33.882 [2024-07-15 20:35:55.378010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e3060 00:19:33.882 [2024-07-15 20:35:55.379519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:33.882 [2024-07-15 20:35:55.379564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.390229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f5be8 00:19:34.141 [2024-07-15 20:35:55.391740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.391788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.401764] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fc560 00:19:34.141 [2024-07-15 20:35:55.403298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.403345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.413744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fda78 00:19:34.141 [2024-07-15 20:35:55.415150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.415202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.425255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f0bc0 00:19:34.141 [2024-07-15 20:35:55.426216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.426264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.436744] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f2d80 00:19:34.141 [2024-07-15 20:35:55.437504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.437556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.451147] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e12d8 00:19:34.141 [2024-07-15 20:35:55.452768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.452820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.463623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e73e0 00:19:34.141 [2024-07-15 20:35:55.466150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.466210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.476544] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fd208 00:19:34.141 [2024-07-15 20:35:55.477910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.477958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.491550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e7c50 00:19:34.141 [2024-07-15 20:35:55.493620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.493661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.500431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e4140 00:19:34.141 [2024-07-15 20:35:55.501505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.501544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.515411] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fe2e8 00:19:34.141 [2024-07-15 20:35:55.517143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.517184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.526239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fbcf0 00:19:34.141 [2024-07-15 20:35:55.528164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.528200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.539314] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e3060 00:19:34.141 [2024-07-15 20:35:55.540411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.540450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.550976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190eaab8 00:19:34.141 [2024-07-15 20:35:55.551855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.551918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.562635] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f20d8 00:19:34.141 [2024-07-15 20:35:55.563415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.563455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.575307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e49b0 00:19:34.141 [2024-07-15 20:35:55.576176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.576222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.586197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e99d8 00:19:34.141 [2024-07-15 20:35:55.587261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.587298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.601273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e4de8 00:19:34.141 [2024-07-15 20:35:55.602984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.603028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.609939] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f20d8 00:19:34.141 [2024-07-15 20:35:55.610660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.610699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:34.141 
[2024-07-15 20:35:55.624445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e6738 00:19:34.141 [2024-07-15 20:35:55.625853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.625898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:34.141 [2024-07-15 20:35:55.635700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ecc78 00:19:34.141 [2024-07-15 20:35:55.636785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.141 [2024-07-15 20:35:55.636822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.647486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e1f80 00:19:34.400 [2024-07-15 20:35:55.648594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.648630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.659755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e23b8 00:19:34.400 [2024-07-15 20:35:55.660882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.660913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.671257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e6738 00:19:34.400 [2024-07-15 20:35:55.672213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.672245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.684007] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ed4e8 00:19:34.400 [2024-07-15 20:35:55.685007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.685041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.696900] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190eea00 00:19:34.400 [2024-07-15 20:35:55.697854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.697897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0029 
p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.709882] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f7970 00:19:34.400 [2024-07-15 20:35:55.710988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.711023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.724184] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e2c28 00:19:34.400 [2024-07-15 20:35:55.726111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.733166] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e5a90 00:19:34.400 [2024-07-15 20:35:55.734119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.734148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.747856] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e4578 00:19:34.400 [2024-07-15 20:35:55.749353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.749392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.758969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f35f0 00:19:34.400 [2024-07-15 20:35:55.760267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.760301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.773660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f0350 00:19:34.400 [2024-07-15 20:35:55.775619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.775657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.782268] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190dece0 00:19:34.400 [2024-07-15 20:35:55.783278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.783313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.794577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ea248 00:19:34.400 [2024-07-15 20:35:55.795577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.795613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.807017] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e8088 00:19:34.400 [2024-07-15 20:35:55.807739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.807777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.822005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fe2e8 00:19:34.400 [2024-07-15 20:35:55.823980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.824020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.831311] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e38d0 00:19:34.400 [2024-07-15 20:35:55.832426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.832465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.846835] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f7538 00:19:34.400 [2024-07-15 20:35:55.847819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.847853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.859842] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f7da8 00:19:34.400 [2024-07-15 20:35:55.860494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.860532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.873560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fc560 00:19:34.400 [2024-07-15 20:35:55.875039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.875076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.884743] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190ea248 00:19:34.400 [2024-07-15 20:35:55.886149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.400 [2024-07-15 20:35:55.886183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:34.400 [2024-07-15 20:35:55.896512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e7818 00:19:34.401 [2024-07-15 20:35:55.897882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.401 [2024-07-15 20:35:55.897915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.909162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e5ec8 00:19:34.659 [2024-07-15 20:35:55.910650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.910683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.921281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190eb760 00:19:34.659 [2024-07-15 20:35:55.922299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.922332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.933401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e5658 00:19:34.659 [2024-07-15 20:35:55.934735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.934768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.944889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e1f80 00:19:34.659 [2024-07-15 20:35:55.946074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.946106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.959308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190de470 00:19:34.659 [2024-07-15 20:35:55.961338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.961371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.967890] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e4578 00:19:34.659 [2024-07-15 20:35:55.968930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.968964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.979982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190df118 00:19:34.659 [2024-07-15 20:35:55.980533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.980565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:55.992622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e8d30 00:19:34.659 [2024-07-15 20:35:55.993363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:6643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:55.993397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:56.003605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e73e0 00:19:34.659 [2024-07-15 20:35:56.004373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:56.004410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:56.018708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190fc128 00:19:34.659 [2024-07-15 20:35:56.020440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:56.020476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:56.030533] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190f0bc0 00:19:34.659 [2024-07-15 20:35:56.032279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:56.032311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:34.659 [2024-07-15 20:35:56.043192] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190e6fa8 00:19:34.659 [2024-07-15 20:35:56.045112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:34.659 [2024-07-15 20:35:56.045146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:19:34.659 [2024-07-15 20:35:56.051746] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190eaef0
00:19:34.659 [2024-07-15 20:35:56.052502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:34.659 [2024-07-15 20:35:56.052533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:19:34.659 [2024-07-15 20:35:56.065112] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190dece0
00:19:34.659 [2024-07-15 20:35:56.066355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:34.659 [2024-07-15 20:35:56.066388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:19:34.660 [2024-07-15 20:35:56.076577] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39880) with pdu=0x2000190edd58
00:19:34.660 [2024-07-15 20:35:56.077682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:34.660 [2024-07-15 20:35:56.077715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:19:34.660
00:19:34.660 Latency(us)
00:19:34.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:34.660 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:19:34.660 nvme0n1 : 2.00 19952.65 77.94 0.00 0.00 6408.02 2472.49 49569.05
00:19:34.660 ===================================================================================================================
00:19:34.660 Total : 19952.65 77.94 0.00 0.00 6408.02 2472.49 49569.05
00:19:34.660 0
00:19:34.660 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:19:34.660 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:19:34.660 | .driver_specific
00:19:34.660 | .nvme_error
00:19:34.660 | .status_code
00:19:34.660 | .command_transient_transport_error'
00:19:34.660 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:19:34.660 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:19:34.917 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 ))
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93623
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93623 ']'
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93623
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93623
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:34.918 killing process with pid 93623
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93623'
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93623
00:19:34.918 Received shutdown signal, test time was about 2.000000 seconds
00:19:34.918
00:19:34.918 Latency(us)
00:19:34.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:34.918 ===================================================================================================================
00:19:34.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:34.918 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93623
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93713
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93713 /var/tmp/bperf.sock
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 93713 ']'
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:19:35.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:35.175 20:35:56 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:35.175 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:35.175 Zero copy mechanism will not be used.
00:19:35.175 [2024-07-15 20:35:56.641384] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
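The digest.sh trace above reads the NVMe error counters back over the bdevperf RPC socket and only lets the test pass when the transient transport error count is non-zero (the '(( 156 > 0 ))' line is that check for this run). Below is a minimal standalone sketch of the same check, built only from the commands visible in the trace; the bperfpid variable is assumed to hold the bdevperf PID that killprocess received (93623 here).

#!/usr/bin/env bash
# Sketch of the get_transient_errcount check traced above (host/digest.sh).
# The trace enables --nvme-error-stat earlier, which is what makes this
# counter available in bdev_get_iostat; paths and names are copied from the log.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock
bdev=nvme0n1

# Pull the per-bdev transient transport error counter out of bdev_get_iostat.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b "$bdev" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
echo "transient transport errors on $bdev: $errcount"

# Fail (via set -e) unless at least one injected digest error was counted,
# then stop the bdevperf process, mirroring the killprocess call in the trace.
(( errcount > 0 ))
kill "${bperfpid:?set bperfpid to the bdevperf PID}"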
00:19:35.175 [2024-07-15 20:35:56.641498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93713 ]
00:19:35.432 [2024-07-15 20:35:56.784413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.432 [2024-07-15 20:35:56.844635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:36.360 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:36.360 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:19:36.360 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:36.360 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:19:36.617 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:19:36.617 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.617 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:36.617 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.617 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:36.617 20:35:57 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:19:36.874 nvme0n1
00:19:36.874 20:35:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:19:36.874 20:35:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.874 20:35:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:19:36.874 20:35:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.874 20:35:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:19:36.874 20:35:58 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:19:37.151 I/O size of 131072 is greater than zero copy threshold (65536).
00:19:37.151 Zero copy mechanism will not be used.
00:19:37.151 Running I/O for 2 seconds...
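The trace above brings up the second error run end to end: bdevperf is relaunched in wait-for-RPC mode (-z) with a randwrite, 128 KiB, queue-depth 16 workload, NVMe error statistics and unlimited bdev retries are enabled, the previous crc32c fault injection is cleared, the controller is attached with the TCP data digest enabled (--ddgst), crc32c corruption is injected through the accel_error module with the same arguments as the trace, and perform_tests starts the I/O. A condensed sketch of that sequence follows; the target-side RPC socket (/var/tmp/spdk.sock) is an assumption, since the trace only shows the rpc_cmd wrapper, and the polling loop stands in for the waitforlisten helper.

#!/usr/bin/env bash
# Sketch of run_bperf_err randwrite 131072 16 as traced above. Binaries, the
# bperf socket and the connection parameters are copied from the log; the
# target socket path is assumed.
set -euo pipefail

spdk=/home/vagrant/spdk_repo/spdk
bperf_sock=/var/tmp/bperf.sock
target_sock=/var/tmp/spdk.sock   # assumed default socket behind rpc_cmd

# 1. Relaunch bdevperf and wait until its RPC socket answers.
"$spdk/build/examples/bdevperf" -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
until "$spdk/scripts/rpc.py" -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# 2. Keep per-command NVMe error counters and retry failed I/O indefinitely.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# 3. Clear any earlier crc32c fault injection, attach with the data digest on,
#    then re-enable crc32c corruption with the arguments shown in the trace.
"$spdk/scripts/rpc.py" -s "$target_sock" accel_error_inject_error -o crc32c -t disable
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$spdk/scripts/rpc.py" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

# 4. Start the 2-second randwrite workload whose digest errors follow below.
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests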
00:19:37.151 [2024-07-15 20:35:58.412972] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.413287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.413319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.418250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.418550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.418590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.423512] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.423810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.423851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.428798] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.429109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.429149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.434067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.434373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.434412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.439388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.439690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.439729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.444733] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.445052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.445097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.450050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.450358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.450397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.455412] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.455716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.455755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.460761] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.461073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.461112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.466097] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.466400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.466444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.471389] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.471688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.471729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.476721] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.477032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.477071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.482009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.482321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.482359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.487462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.487757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.487795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.492736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.493051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.493089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.498012] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.498313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.498352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.151 [2024-07-15 20:35:58.503302] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.151 [2024-07-15 20:35:58.503604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.151 [2024-07-15 20:35:58.503641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.508613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.508934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.508972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.513957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.514261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.514299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.519357] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.519653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.519690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.524647] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.524995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.530041] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.530353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.530391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.535335] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.535633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.535671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.540582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.540908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.540946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.545915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.546231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.546269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.551207] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.551504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.551542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.556514] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.556826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 
[2024-07-15 20:35:58.556865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.561816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.562129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.562167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.567078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.567376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.567412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.572365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.572661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.572712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.577636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.577954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.577992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.582931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.583226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.583264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.588219] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.588520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.588557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.593495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.593792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.593830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.598766] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.599086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.599123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.604036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.604334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.604370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.609308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.609605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.609643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.614613] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.614942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.614977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.619969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.620268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.620306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.625312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.625613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.625651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.630783] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.152 [2024-07-15 20:35:58.631098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.152 [2024-07-15 20:35:58.631131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.152 [2024-07-15 20:35:58.636101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.434 [2024-07-15 20:35:58.636409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.434 [2024-07-15 20:35:58.636440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.434 [2024-07-15 20:35:58.641367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.434 [2024-07-15 20:35:58.641683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.434 [2024-07-15 20:35:58.641715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.434 [2024-07-15 20:35:58.646672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.434 [2024-07-15 20:35:58.646996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.434 [2024-07-15 20:35:58.647026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.434 [2024-07-15 20:35:58.652085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.434 [2024-07-15 20:35:58.652391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.434 [2024-07-15 20:35:58.652421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.434 [2024-07-15 20:35:58.657415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.434 [2024-07-15 20:35:58.657728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.434 [2024-07-15 20:35:58.657766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.662775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.663092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.663125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.668126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.668426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.668457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.673445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.673740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.673770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.678776] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.679088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.679114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.684096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.684391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.684422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.689420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.689716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.689748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.694663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.694976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.695002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.699999] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.700301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.700331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.705267] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 
[2024-07-15 20:35:58.705563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.705594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.710558] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.710888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.710918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.715845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.716159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.716190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.721122] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.721420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.721451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.726443] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.726739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.726778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.731771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.732086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.732117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.737172] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.737469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.737501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.742883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.743181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.743212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.748162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.748472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.748504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.753617] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.753950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.753984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.759079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.759387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.759418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.764436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.764771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.764802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.769909] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.770208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.770238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.775299] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.775596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.780682] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.781018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.781049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.786031] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.786330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.786360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.791343] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.791637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.791668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.796716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.797033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.797076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.802082] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.802380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.802418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.807383] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.807684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.807715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.435 [2024-07-15 20:35:58.812671] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.435 [2024-07-15 20:35:58.813013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.435 [2024-07-15 20:35:58.813043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
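Each repetition above follows the same pattern: the TCP transport computes the CRC32C data digest (DDGST) over an incoming data PDU, finds it does not match the digest carried in the PDU, logs the data digest error at tcp.c:2081, and the WRITE command then completes with a TRANSIENT TRANSPORT ERROR. The sketch below illustrates that digest check with a plain bitwise CRC32C; the names crc32c and verify_data_digest are illustrative only and are not the SPDK helpers used in tcp.c.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78.
     * Illustrative only -- SPDK uses its own optimized CRC helpers. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Returns non-zero when the digest computed over the PDU payload does not
     * match the received DDGST -- the "data digest error" case in this log. */
    static int verify_data_digest(const void *payload, size_t len, uint32_t recv_ddgst)
    {
        return crc32c(payload, len) != recv_ddgst;
    }

    int main(void)
    {
        uint8_t payload[32];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t good = crc32c(payload, sizeof(payload));
        printf("intact: mismatch=%d, corrupted: mismatch=%d\n",
               verify_data_digest(payload, sizeof(payload), good),
               verify_data_digest(payload, sizeof(payload), good ^ 1u));
        return 0;
    }

With a deliberately corrupted digest the check reports a mismatch, which is the condition this test exercises for every 32-block WRITE shown in the log.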
00:19:37.436 [2024-07-15 20:35:58.818022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.818322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.818353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.823450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.823767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.823798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.828778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.829101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.829131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.834160] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.834460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.834491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.839482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.839775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.839806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.844787] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.845113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.845144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.850178] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.850485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.850515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.855478] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.855781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.855812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.860838] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.861153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.861186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.866162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.866459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.866489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.871450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.871745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.871776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.876737] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.877063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.877094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.882058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.882361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.882392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.887384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.887678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.887708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.892662] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.892987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.893024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.898000] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.898296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.898326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.903274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.903572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.903603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.908615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.908940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.908972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.913973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.914271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.914302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.919255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.919557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.919589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.924543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.924883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.924914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.436 [2024-07-15 20:35:58.930006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.436 [2024-07-15 20:35:58.930293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.436 [2024-07-15 20:35:58.930327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.935138] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.935416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.935448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.940258] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.940535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.940580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.945429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.945713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.945745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.950560] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.950848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.950898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.955714] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.956005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.956036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.960768] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.961066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 
[2024-07-15 20:35:58.961098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.965843] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.966139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.966170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.695 [2024-07-15 20:35:58.970960] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.695 [2024-07-15 20:35:58.971236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.695 [2024-07-15 20:35:58.971266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:58.976053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:58.976340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:58.976373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:58.981158] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:58.981438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:58.981475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:58.986300] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:58.986584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:58.986616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:58.991393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:58.991676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:58.991715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:58.996585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:58.996933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:58.996972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.001915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.002224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.002263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.007141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.007447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.007484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.012361] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.012666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.012726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.017515] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.017826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.017864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.022759] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.023082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.023118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.027931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.028232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.028270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.033110] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.033420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.033452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.038260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.038563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.038613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.043392] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.043669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.043701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.048495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.048779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.048804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.053648] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.053948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.053979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.058694] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.058986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.059016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.063778] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.064067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.064097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.068914] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.069188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.069218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.074022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.074301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.074331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.079149] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.079427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.079453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.084257] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.084558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.084593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.089429] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.089706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.089737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.094486] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.094766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.094796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.099579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.099856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.099900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.104652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 
[2024-07-15 20:35:59.104953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.104983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.109712] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.110005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.110034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.696 [2024-07-15 20:35:59.114793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.696 [2024-07-15 20:35:59.115086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.696 [2024-07-15 20:35:59.115116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.119863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.120160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.120190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.125028] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.125306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.125335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.130125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.130408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.130438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.135190] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.135466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.135497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.140222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.140499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.140529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.145337] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.145611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.145641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.150436] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.150710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.150741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.155511] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.155789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.155820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.160603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.160908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.160938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.165729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.166022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.166051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.170811] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.171099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.171129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.175805] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.176100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.176129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.180930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.181216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.181247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.186030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.186306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.186337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.697 [2024-07-15 20:35:59.191101] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.697 [2024-07-15 20:35:59.191377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.697 [2024-07-15 20:35:59.191406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.196215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.196490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.196520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.201322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.201598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.201627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.206399] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.206672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.206702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
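The matching completion lines report fields of the completion queue entry: "(00/22)" is status code type 0x0 (generic command status) with status code 0x22, Transient Transport Error, while p, m, and dnr are the phase, more, and do-not-retry bits and sqhd is the submission queue head pointer from Dword 2. The sketch below decodes those status bits from completion Dword 3 using the bit layout in the NVMe base specification; struct cqe_status and decode_cqe_dw3 are illustrative names, not SPDK APIs.

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe completion queue entry, Dword 3 bit layout (NVMe base spec):
     *   bit 16      Phase tag (P)
     *   bits 17..24 Status Code (SC)
     *   bits 25..27 Status Code Type (SCT)
     *   bits 28..29 Command Retry Delay (CRD)
     *   bit 30      More (M)
     *   bit 31      Do Not Retry (DNR)
     */
    struct cqe_status {
        unsigned p, sc, sct, crd, m, dnr;
    };

    static struct cqe_status decode_cqe_dw3(uint32_t dw3)
    {
        struct cqe_status s = {
            .p   = (dw3 >> 16) & 0x1,
            .sc  = (dw3 >> 17) & 0xFF,
            .sct = (dw3 >> 25) & 0x7,
            .crd = (dw3 >> 28) & 0x3,
            .m   = (dw3 >> 30) & 0x1,
            .dnr = (dw3 >> 31) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22: generic status, Transient Transport Error,
         * which the log prints as "(00/22)" with m:0 dnr:0. */
        uint32_t dw3 = (0x22u << 17) | (0x0u << 25);
        struct cqe_status s = decode_cqe_dw3(dw3);

        printf("sct:%x sc:%x p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Because dnr is 0, the host is permitted to retry these WRITEs; the test only checks that the digest failure surfaces as this transient transport status rather than as data corruption.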
00:19:37.956 [2024-07-15 20:35:59.211471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.211748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.211777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.216496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.216793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.216823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.221576] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.221851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.221893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.226672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.226963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.226994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.231739] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.232028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.232057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.236820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.237113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.237144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.241959] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.956 [2024-07-15 20:35:59.242235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.956 [2024-07-15 20:35:59.242265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.956 [2024-07-15 20:35:59.247080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.247356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.247399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.252216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.252496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.252527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.257329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.257614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.257644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.262520] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.262796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.262826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.267643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.267931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.267961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.272730] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.273023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.273052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.277806] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.278097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.278128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.282862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.283152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.283183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.288093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.288375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.288404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.293198] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.293476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.293508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.298249] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.298529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.298560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.303365] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.303649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.303681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.308420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.308708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.308739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.313474] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.313752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.313786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.318573] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.318882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.318918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.323751] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.324094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.324131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.329059] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.329371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.329409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.334240] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.334545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.334581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.339477] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.339780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.339816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.344673] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.345013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.345051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.349833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.350151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 
[2024-07-15 20:35:59.350186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.354987] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.355299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.355337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.360168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.360478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.360518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.365396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.365709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.365744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.370524] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.370860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.370929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.375732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.376098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.376148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.381022] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.381299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.381330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.386072] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.386349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.386380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.957 [2024-07-15 20:35:59.391174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.957 [2024-07-15 20:35:59.391453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.957 [2024-07-15 20:35:59.391482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.396250] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.396528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.396558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.401289] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.401565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.401595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.406333] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.406611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.406642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.411366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.411647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.411678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.416390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.416677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.416718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.421499] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.421777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.421816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.426615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.426924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.426962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.431734] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.432037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.432075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.436941] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.437219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.437250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.442047] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.442324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.442355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.447187] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.447467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.447494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:37.958 [2024-07-15 20:35:59.452359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:37.958 [2024-07-15 20:35:59.452634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:37.958 [2024-07-15 20:35:59.452672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.457502] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.457781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.457811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.462678] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.462973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.463005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.467814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.468113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.468144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.472957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.473248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.473281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.478179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.478457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.478487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.483306] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.483582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.483612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.488406] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.488699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.488730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.493585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 
[2024-07-15 20:35:59.493862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.493908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.498720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.499011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.499041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.503853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.504142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.504166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.508985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.509268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.509298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.514157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.514462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.514499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.519402] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.519711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.519748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.525649] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.525973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.526010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.531656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) 
with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.531980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.532018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.537611] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.537931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.537968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.543591] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.543930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.217 [2024-07-15 20:35:59.543972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.217 [2024-07-15 20:35:59.549691] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.217 [2024-07-15 20:35:59.550041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.550078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.555704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.556034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.556065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.561674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.561978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.562003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.567454] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.567730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.567760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.573213] tcp.c:2081:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.573496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.573528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.579019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.579290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.579320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.584828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.585125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.585156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.590674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.590955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.590986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.595722] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.595989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.596014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.600318] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.600562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.600593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.604930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.605168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.605198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 
20:35:59.609628] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.609857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.609899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.614177] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.614410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.614434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.618788] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.619036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.619066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.623421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.623654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.623685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.628118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.628354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.628384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.632777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.633025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.633050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.637417] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.637649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.637674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.642066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.642297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.642328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.646651] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.646900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.646930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.651291] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.651524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.651554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.655834] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.656082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.656108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.660438] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.660672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.660707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.665045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.665277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.665308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.669622] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.669857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.669902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.674222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.674455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.674486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.678849] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.679104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.679133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.683579] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.683811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.683841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.688239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.688478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.688508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.218 [2024-07-15 20:35:59.692930] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.218 [2024-07-15 20:35:59.693164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.218 [2024-07-15 20:35:59.693194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.219 [2024-07-15 20:35:59.697506] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.219 [2024-07-15 20:35:59.697745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.219 [2024-07-15 20:35:59.697776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.219 [2024-07-15 20:35:59.702188] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.219 [2024-07-15 20:35:59.702422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.219 [2024-07-15 20:35:59.702452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.219 [2024-07-15 20:35:59.706807] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.219 [2024-07-15 20:35:59.707054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.219 [2024-07-15 20:35:59.707084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.219 [2024-07-15 20:35:59.711464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.219 [2024-07-15 20:35:59.711698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.219 [2024-07-15 20:35:59.711728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.478 [2024-07-15 20:35:59.716155] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.478 [2024-07-15 20:35:59.716389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.478 [2024-07-15 20:35:59.716419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.478 [2024-07-15 20:35:59.720760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.478 [2024-07-15 20:35:59.721011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.478 [2024-07-15 20:35:59.721036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.478 [2024-07-15 20:35:59.725378] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.478 [2024-07-15 20:35:59.725613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.478 [2024-07-15 20:35:59.725644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.478 [2024-07-15 20:35:59.729968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.478 [2024-07-15 20:35:59.730200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.478 [2024-07-15 20:35:59.730229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.478 [2024-07-15 20:35:59.734567] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.478 [2024-07-15 20:35:59.734800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.478 
[2024-07-15 20:35:59.734830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.478 [2024-07-15 20:35:59.739175] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.478 [2024-07-15 20:35:59.739407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.739431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.743794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.744042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.744067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.748354] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.748586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.748617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.753030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.753264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.753294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.757636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.757886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.757915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.762254] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.762486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.762510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.766851] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.767099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.767130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.771532] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.771769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.771799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.776091] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.776324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.776355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.780672] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.780933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.780958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.785298] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.785536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.785568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.789919] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.790158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.790196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.794687] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.794938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.794963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.799353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.799585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.799616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.804099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.804335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.808809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.809065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.809095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.813559] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.813795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.813826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.818332] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.818567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.818591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.823229] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.823466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.823496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.827996] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.828231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.828260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.832809] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.833081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.833112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.837621] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.837853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.837897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.842545] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.842778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.842809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.847386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.847643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.847674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.852213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.852446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.852478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.856976] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.857209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.857246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.861736] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.861987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.862017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.866441] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 
[2024-07-15 20:35:59.866681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.866712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.871197] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.871441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.479 [2024-07-15 20:35:59.871472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.479 [2024-07-15 20:35:59.875970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.479 [2024-07-15 20:35:59.876203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.876234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.880679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.880958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.880983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.885415] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.885649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.885688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.890123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.890364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.890396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.894971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.895205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.895235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.899700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.899954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.899979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.904479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.904723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.904748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.909214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.909448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.909480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.913936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.914179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.914210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.918652] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.918910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.918940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.923394] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.923627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.923660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.928137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.928373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.928404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.932930] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.933166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.933196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.937656] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.937904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.937929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.942325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.942561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.942584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.947015] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.947250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.947274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.951674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.951925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.951949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.956339] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.956574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.956604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.961074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.961312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.961352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
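Note on the records above: the repeated tcp.c data_crc32_calc_done "Data digest error" messages are the receiving side's NVMe/TCP data-digest check rejecting WRITE payloads whose CRC32C does not match the digest carried in the PDU, and each rejection is followed by the host printing the corresponding completion with the COMMAND TRANSIENT TRANSPORT ERROR (00/22) status. The snippet below is a minimal, self-contained sketch of that kind of check for orientation only; it is not SPDK's implementation, and crc32c(), verify_data_digest(), and the example payload are illustrative names and data.

/*
 * Illustrative sketch (not SPDK source): a receiver-side data-digest check
 * of the kind logged above. The NVMe/TCP transport carries a CRC32C of the
 * PDU payload (the DDGST field); the receiver recomputes it and fails the
 * command when the two values differ.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Plain bitwise CRC32C (reflected Castagnoli polynomial 0x82F63B78). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int k = 0; k < 8; k++) {
			crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1u));
		}
	}

	return crc ^ 0xFFFFFFFFu;
}

/* Compare the digest carried with the PDU against one computed locally. */
static bool verify_data_digest(const uint8_t *payload, size_t len, uint32_t ddgst)
{
	return crc32c(payload, len) == ddgst;
}

int main(void)
{
	uint8_t payload[32];

	memset(payload, 0xA5, sizeof(payload));

	uint32_t good = crc32c(payload, sizeof(payload));
	uint32_t bad  = good ^ 0x1u;	/* simulate a corrupted digest on the wire */

	printf("intact digest:    %s\n",
	       verify_data_digest(payload, sizeof(payload), good) ? "ok" : "data digest error");
	printf("corrupted digest: %s\n",
	       verify_data_digest(payload, sizeof(payload), bad) ? "ok" : "data digest error");

	return 0;
}

The bitwise loop is only for clarity; real transports typically use table-driven or hardware-accelerated CRC32C (for example the SSE4.2 crc32 instruction) for the same comparison.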
00:19:38.480 [2024-07-15 20:35:59.965826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.966077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.966107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.970546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.970781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.970821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.480 [2024-07-15 20:35:59.975206] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.480 [2024-07-15 20:35:59.975459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.480 [2024-07-15 20:35:59.975490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:35:59.979926] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:35:59.980156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:35:59.980185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:35:59.984495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:35:59.984741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:35:59.984766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:35:59.989281] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:35:59.989513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:35:59.989537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:35:59.994067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:35:59.994321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:35:59.994352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:35:59.998840] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:35:59.999106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:35:59.999137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.003663] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.003941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.003979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.008386] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.008633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.008667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.013344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.013583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.013614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.018167] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.018406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.018437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.022978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.023231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.023265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.027711] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.027972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.028005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.032461] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.032725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.032759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.037283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.037548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.037587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.042127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.042369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.042403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.046932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.047169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.047199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.051670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.051925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.051958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.056485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.056733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.056766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.061216] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.061479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.061529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.065974] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.066249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.066291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.070769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.071022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.071056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.075541] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.075805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.080321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.080564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.080594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.740 [2024-07-15 20:36:00.085074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.740 [2024-07-15 20:36:00.085309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.740 [2024-07-15 20:36:00.085340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.089750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.090003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.090028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.094534] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.094770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 
[2024-07-15 20:36:00.094794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.099301] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.099541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.099573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.104161] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.104405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.104435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.108935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.109170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.109200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.113777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.114031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.114063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.118582] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.118817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.118848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.123547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.123787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.123819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.128393] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.128626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.128657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.133325] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.133582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.133619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.138218] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.138453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.138480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.142978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.143215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.143245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.147708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.147970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.148000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.152467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.152713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.152753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.157303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.157553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.157579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.162030] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.162275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.162305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.166707] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.166958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.166988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.171379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.171627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.171658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.176139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.176373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.176397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.180862] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.181115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.181145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.185627] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.185862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.185903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.190470] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.190718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.190750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.195262] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.195505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.195547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.200036] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.200271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.200296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.204775] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.205037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.205063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.209602] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.209848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.209893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.214481] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.214723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.214754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.219278] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.219519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.741 [2024-07-15 20:36:00.219550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:38.741 [2024-07-15 20:36:00.224018] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.741 [2024-07-15 20:36:00.224257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.742 [2024-07-15 20:36:00.224282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:38.742 [2024-07-15 20:36:00.228793] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.742 
[2024-07-15 20:36:00.229047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.742 [2024-07-15 20:36:00.229073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:38.742 [2024-07-15 20:36:00.233668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:38.742 [2024-07-15 20:36:00.233921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:38.742 [2024-07-15 20:36:00.233952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.238409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.238675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.238707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.243265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.243531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.243563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.248087] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.248326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.248357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.252863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.253119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.253149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.257618] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.257854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.257900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.262316] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) 
with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.262550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.262581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.267066] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.267302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.267332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.271729] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.271975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.272010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.276475] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.276723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.276748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.281205] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.281438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.281462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.286944] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.287183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.287207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.291633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.291890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.291920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.296437] tcp.c:2081:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.296673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.296717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.301174] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.301416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.301447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.305901] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.306142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.306174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.310569] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.310807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.310838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.315328] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.315559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.315590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.320011] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.320247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.320288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.324735] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.324988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.325019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.329468] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.329701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.329733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.334194] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.334430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.334460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.002 [2024-07-15 20:36:00.338896] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.002 [2024-07-15 20:36:00.339133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.002 [2024-07-15 20:36:00.339163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.343589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.343832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.343863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.348290] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.348526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.348557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.352970] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.353210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.353241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.357700] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.357958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.357988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:19:39.003 [2024-07-15 20:36:00.362363] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.362598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.362629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.367055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.367291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.367323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.371688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.371939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.376444] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.376677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.376718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.381125] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.381360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.381391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.385797] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.386051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.386081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.390464] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.390697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.390728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.395141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.395376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.395406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.003 [2024-07-15 20:36:00.399898] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b39a20) with pdu=0x2000190fef90 00:19:39.003 [2024-07-15 20:36:00.400133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.003 [2024-07-15 20:36:00.400163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:39.003 00:19:39.003 Latency(us) 00:19:39.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.003 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:19:39.003 nvme0n1 : 2.00 6138.76 767.34 0.00 0.00 2600.40 2129.92 10962.39 00:19:39.003 =================================================================================================================== 00:19:39.003 Total : 6138.76 767.34 0.00 0.00 2600.40 2129.92 10962.39 00:19:39.003 0 00:19:39.003 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:19:39.003 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:19:39.003 | .driver_specific 00:19:39.003 | .nvme_error 00:19:39.003 | .status_code 00:19:39.003 | .command_transient_transport_error' 00:19:39.003 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:19:39.003 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 396 > 0 )) 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93713 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93713 ']' 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93713 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93713 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:39.273 killing process with pid 93713 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93713' 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # 
kill 93713 00:19:39.273 Received shutdown signal, test time was about 2.000000 seconds 00:19:39.273 00:19:39.273 Latency(us) 00:19:39.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.273 =================================================================================================================== 00:19:39.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:39.273 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93713 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93417 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 93417 ']' 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 93417 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 93417 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:39.532 killing process with pid 93417 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 93417' 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 93417 00:19:39.532 20:36:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 93417 00:19:39.791 00:19:39.791 real 0m18.267s 00:19:39.791 user 0m36.503s 00:19:39.791 sys 0m4.324s 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:19:39.791 ************************************ 00:19:39.791 END TEST nvmf_digest_error 00:19:39.791 ************************************ 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.791 rmmod nvme_tcp 00:19:39.791 rmmod nvme_fabrics 00:19:39.791 rmmod nvme_keyring 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 93417 ']' 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@490 -- # killprocess 93417 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 93417 ']' 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 93417 00:19:39.791 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (93417) - No such process 00:19:39.791 Process with pid 93417 is not found 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 93417 is not found' 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:39.791 00:19:39.791 real 0m37.149s 00:19:39.791 user 1m12.064s 00:19:39.791 sys 0m8.974s 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.791 20:36:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:39.791 ************************************ 00:19:39.791 END TEST nvmf_digest 00:19:39.791 ************************************ 00:19:40.049 20:36:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:40.049 20:36:01 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 1 -eq 1 ]] 00:19:40.050 20:36:01 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ tcp == \t\c\p ]] 00:19:40.050 20:36:01 nvmf_tcp -- nvmf/nvmf.sh@113 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:40.050 20:36:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:40.050 20:36:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:40.050 20:36:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.050 ************************************ 00:19:40.050 START TEST nvmf_mdns_discovery 00:19:40.050 ************************************ 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:19:40.050 * Looking for test storage... 
00:19:40.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:19:40.050 
20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.050 Cannot find device "nvmf_tgt_br" 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.050 Cannot find device "nvmf_tgt_br2" 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.050 Cannot find device "nvmf_tgt_br" 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.050 Cannot find device "nvmf_tgt_br2" 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:19:40.050 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.307 20:36:01 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:19:40.307 00:19:40.307 --- 10.0.0.2 ping statistics --- 00:19:40.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.307 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:19:40.307 00:19:40.307 --- 10.0.0.3 ping statistics --- 00:19:40.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.307 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:40.307 00:19:40.307 --- 10.0.0.1 ping statistics --- 00:19:40.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.307 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=94010 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:40.307 
20:36:01 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 94010 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94010 ']' 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.307 20:36:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.564 [2024-07-15 20:36:01.858039] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:19:40.564 [2024-07-15 20:36:01.858135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.564 [2024-07-15 20:36:01.999970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.822 [2024-07-15 20:36:02.070470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.822 [2024-07-15 20:36:02.070524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.822 [2024-07-15 20:36:02.070537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.822 [2024-07-15 20:36:02.070547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.822 [2024-07-15 20:36:02.070555] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
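[Editor's sketch] The nvmf_veth_init entries above build the virtual test network (an nvmf_tgt_ns_spdk namespace holding the two target interfaces, bridged back to the initiator-side veth peers), and the nvmfappstart entry launches the target inside that namespace. A minimal shell sketch of the equivalent manual steps, reconstructed from the trace (interface names, addresses, and nvmf_tgt flags are taken from the trace itself; paths assume the SPDK repo root, and the common.sh helper internals are not reproduced):

    # Namespace for the target, plus one veth pair per test interface.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator side gets 10.0.0.1; the two target-side interfaces get 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and bridge the host-side peers together.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open NVMe/TCP traffic and confirm reachability in both directions (the pings in the trace).
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Start the NVMe-oF target inside the namespace; --wait-for-rpc pauses it until RPC configuration.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &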
00:19:40.822 [2024-07-15 20:36:02.070589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.417 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.417 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:41.417 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.417 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.417 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:02 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 [2024-07-15 20:36:03.006409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 [2024-07-15 20:36:03.014441] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 null0 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
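[Editor's sketch] The rpc_cmd entries above configure the freshly started target: a discovery filter on address, explicit framework start, a TCP transport, a discovery listener on 10.0.0.2:8009, and the first of the null bdevs that will back the namespaces (null1 through null3 follow below). Roughly the same sequence expressed with the stock scripts/rpc.py client, assuming the default target RPC socket (/var/tmp/spdk.sock):

    RPC=./scripts/rpc.py   # talks to the target's default RPC socket

    # Report only discovery log entries whose listener address matches the discovery listener queried.
    $RPC nvmf_set_config --discovery-filter=address

    # The target was launched with --wait-for-rpc, so subsystem initialization is triggered explicitly.
    $RPC framework_start_init

    # TCP transport (options as in the trace, including an 8192-byte I/O unit size),
    # then a discovery listener on the first target address.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009

    # Null bdevs (1000 MB, 512-byte blocks) that the test later exports as namespaces.
    $RPC bdev_null_create null0 1000 512
    $RPC bdev_null_create null1 1000 512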
00:19:41.675 null1 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 null2 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 null3 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94066 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94066 /tmp/host.sock 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@829 -- # '[' -z 94066 ']' 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.675 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.675 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.675 [2024-07-15 20:36:03.116518] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
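[Editor's sketch] With the null bdevs in place, the trace starts a second SPDK application outside the namespace to act as the NVMe-oF host, giving it its own RPC socket (/tmp/host.sock) so the two applications can be driven independently. A sketch of that step; the readiness poll via rpc_get_methods is an illustrative stand-in for the trace's waitforlisten helper:

    # Second SPDK app: one core, separate RPC socket, used as the mDNS discovery host.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!

    # Wait until the app answers on its RPC socket before issuing host-side RPCs.
    until ./scripts/rpc.py -s /tmp/host.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done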
00:19:41.675 [2024-07-15 20:36:03.116619] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94066 ] 00:19:41.934 [2024-07-15 20:36:03.252368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.934 [2024-07-15 20:36:03.310823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.934 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.934 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@862 -- # return 0 00:19:41.934 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:19:41.934 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:19:41.934 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:19:42.193 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94076 00:19:42.193 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:19:42.193 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:19:42.193 20:36:03 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:19:42.193 Process 979 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:19:42.193 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:19:42.193 Successfully dropped root privileges. 00:19:42.193 avahi-daemon 0.8 starting up. 00:19:42.193 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:19:42.193 Successfully called chroot(). 00:19:42.193 Successfully dropped remaining capabilities. 00:19:42.193 No service file found in /etc/avahi/services. 00:19:43.124 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:43.124 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:19:43.124 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:43.124 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:19:43.124 Network interface enumeration completed. 00:19:43.124 Registering new address record for fe80::80e1:99ff:fe10:6dcd on nvmf_tgt_if2.*. 00:19:43.124 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:19:43.124 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:19:43.124 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:19:43.124 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 761250284. 
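[Editor's sketch] The avahi output above comes from an avahi-daemon restarted inside the target namespace with a configuration fed through a process substitution (/dev/fd/63); it is restricted to the two target interfaces and IPv4 only, which is why it registers address records only for 10.0.0.2 and 10.0.0.3. Spelled out with a regular config file (the /tmp path below is a stand-in for the trace's process substitution):

    # Stop any previously running responder, then write the minimal test configuration.
    avahi-daemon --kill || true
    echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' > /tmp/avahi-test.conf

    # Run avahi-daemon inside the target namespace with that configuration (-f selects the config file).
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-test.conf &
    avahipid=$!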
00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:43.124 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
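[Editor's sketch] On the host-side socket the test enables bdev_nvme logging and starts mDNS-based discovery for the _nvme-disc._tcp service type; the get_subsystem_names and get_bdev_list helpers seen above are thin wrappers around the RPC calls sketched below, and at this point they still return empty lists because no subsystem has been published yet. The jq filters mirror the ones in the trace:

    HOST_RPC="./scripts/rpc.py -s /tmp/host.sock"

    $HOST_RPC log_set_flag bdev_nvme

    # Browse mDNS for NVMe discovery services; attached controllers are named mdns<N>_nvme<M>.
    $HOST_RPC bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # Nothing attached yet: both lists stay empty until the target publishes its services.
    $HOST_RPC bdev_nvme_get_controllers | jq -r '.[].name'
    $HOST_RPC bdev_get_bdevs | jq -r '.[].name'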
00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.382 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.383 [2024-07-15 20:36:04.810807] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.383 20:36:04 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.383 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.640 [2024-07-15 20:36:04.886983] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 [2024-07-15 20:36:04.926884] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 
20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 [2024-07-15 20:36:04.934894] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.641 20:36:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:19:44.600 [2024-07-15 20:36:05.710795] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:44.858 [2024-07-15 20:36:06.310804] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:44.858 [2024-07-15 20:36:06.310844] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:44.858 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:44.858 cookie is 0 00:19:44.858 is_local: 1 00:19:44.858 our_own: 0 00:19:44.858 wide_area: 0 00:19:44.858 multicast: 1 00:19:44.858 cached: 1 00:19:45.115 [2024-07-15 20:36:06.410794] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:45.115 [2024-07-15 20:36:06.410831] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:45.115 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:45.115 cookie is 0 00:19:45.115 is_local: 1 00:19:45.115 our_own: 0 00:19:45.116 wide_area: 0 00:19:45.116 multicast: 1 00:19:45.116 cached: 1 00:19:45.116 [2024-07-15 20:36:06.410845] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:45.116 [2024-07-15 20:36:06.510802] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:45.116 [2024-07-15 20:36:06.510851] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:45.116 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:45.116 cookie is 0 00:19:45.116 is_local: 1 00:19:45.116 our_own: 0 00:19:45.116 wide_area: 0 00:19:45.116 multicast: 1 00:19:45.116 cached: 1 00:19:45.116 [2024-07-15 20:36:06.610789] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:45.116 [2024-07-15 20:36:06.610819] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:45.116 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:45.116 cookie is 0 00:19:45.116 is_local: 1 00:19:45.116 our_own: 0 00:19:45.116 wide_area: 0 00:19:45.116 multicast: 1 00:19:45.116 cached: 1 00:19:45.116 [2024-07-15 20:36:06.610833] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:46.049 [2024-07-15 20:36:07.321896] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:46.049 [2024-07-15 20:36:07.321939] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:46.049 [2024-07-15 20:36:07.321959] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:46.049 [2024-07-15 20:36:07.408076] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:19:46.049 [2024-07-15 20:36:07.464995] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:46.049 [2024-07-15 20:36:07.465031] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:46.049 [2024-07-15 20:36:07.521589] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:46.049 [2024-07-15 20:36:07.521619] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:46.049 [2024-07-15 20:36:07.521638] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:46.307 [2024-07-15 20:36:07.607716] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:19:46.307 [2024-07-15 20:36:07.663814] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:46.307 [2024-07-15 20:36:07.663850] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:48.837 20:36:09 
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:48.837 20:36:09 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.837 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:19:48.838 
20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.838 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.096 20:36:10 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.031 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.032 [2024-07-15 20:36:11.484445] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:50.032 [2024-07-15 20:36:11.485385] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:50.032 [2024-07-15 20:36:11.485427] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:50.032 [2024-07-15 20:36:11.485467] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:50.032 [2024-07-15 20:36:11.485482] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:50.032 [2024-07-15 20:36:11.492355] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:50.032 [2024-07-15 20:36:11.493371] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:50.032 [2024-07-15 20:36:11.493431] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.032 20:36:11 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:19:50.290 [2024-07-15 20:36:11.624495] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:19:50.290 [2024-07-15 20:36:11.624727] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:19:50.290 [2024-07-15 20:36:11.685799] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:50.290 [2024-07-15 20:36:11.685833] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:50.290 [2024-07-15 20:36:11.685841] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:50.290 [2024-07-15 20:36:11.685862] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:50.290 
[2024-07-15 20:36:11.685935] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:50.290 [2024-07-15 20:36:11.685946] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:50.290 [2024-07-15 20:36:11.685952] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:50.290 [2024-07-15 20:36:11.685967] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:50.290 [2024-07-15 20:36:11.731611] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:19:50.290 [2024-07-15 20:36:11.731650] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:50.290 [2024-07-15 20:36:11.731697] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:50.291 [2024-07-15 20:36:11.731707] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 
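[Editor's sketch] The target-side entries above publish two subsystems (cnode0 on 10.0.0.2, cnode20 on 10.0.0.3) and announce them over mDNS with nvmf_publish_mdns_prr; avahi then resolves the spdk0/spdk1 services, the host's discovery poller attaches mdns0_nvme0 and mdns1_nvme0 with 4420 paths, and the extra 4421 listeners added later show up as additional paths on the same controllers. Condensed from the trace to the target-side RPCs (same NQNs, addresses, and bdev names; second namespaces and 4421 listeners included as they appear later in the run):

    RPC=./scripts/rpc.py   # target-side socket

    # First subsystem, served on 10.0.0.2, restricted to the host NQN used for discovery.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Second subsystem on 10.0.0.3, plus a second discovery listener on that address.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

    # Publish the discovery services over mDNS so browsers (the host app above) can find them.
    $RPC nvmf_publish_mdns_prr

    # Later in the run: second namespaces (notifications 3 and 4 on the host) ...
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3

    # ... and extra data listeners on 4421, which the host reports as new paths for each controller.
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421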
00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:51.226 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.488 [2024-07-15 20:36:12.805298] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:51.488 [2024-07-15 20:36:12.805336] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:51.488 [2024-07-15 20:36:12.805373] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:51.488 [2024-07-15 20:36:12.805387] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:51.488 [2024-07-15 20:36:12.813288] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:51.488 [2024-07-15 20:36:12.813346] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:19:51.488 [2024-07-15 20:36:12.814492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.814527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.814541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.814551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.814561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.814570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.814580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.814590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.814599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.488 20:36:12 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:19:51.488 [2024-07-15 20:36:12.822474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.822503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.822515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.822525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.822535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.822545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.822554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:51.488 [2024-07-15 20:36:12.822564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:51.488 [2024-07-15 20:36:12.822573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.824451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.832444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.834470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.488 [2024-07-15 20:36:12.834584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.488 [2024-07-15 20:36:12.834606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.488 [2024-07-15 20:36:12.834618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.834635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.834650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.488 [2024-07-15 20:36:12.834659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.488 [2024-07-15 20:36:12.834670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:19:51.488 [2024-07-15 20:36:12.834686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.488 [2024-07-15 20:36:12.842457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.488 [2024-07-15 20:36:12.842542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.488 [2024-07-15 20:36:12.842563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.488 [2024-07-15 20:36:12.842574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.842590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.842604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.488 [2024-07-15 20:36:12.842613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.488 [2024-07-15 20:36:12.842623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.488 [2024-07-15 20:36:12.842637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.488 [2024-07-15 20:36:12.844526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.488 [2024-07-15 20:36:12.844600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.488 [2024-07-15 20:36:12.844620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.488 [2024-07-15 20:36:12.844630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.844646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.844660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.488 [2024-07-15 20:36:12.844669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.488 [2024-07-15 20:36:12.844679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.488 [2024-07-15 20:36:12.844704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
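The errno = 111 (connection refused) records in this stretch follow directly from removing the 4420 listeners at @160/@161: the host still holds controllers whose connected path points at port 4420, so bdev_nvme keeps disconnecting and retrying that address until the next discovery log page drops the stale path. A hedged sketch of the target-side step that puts the host in this state, using the same RPCs the test issues (routing of each call to the right target app's RPC socket is handled by the harness and omitted here):

./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420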
00:19:51.488 [2024-07-15 20:36:12.852510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.488 [2024-07-15 20:36:12.852588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.488 [2024-07-15 20:36:12.852610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.488 [2024-07-15 20:36:12.852620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.852636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.852650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.488 [2024-07-15 20:36:12.852659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.488 [2024-07-15 20:36:12.852669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.488 [2024-07-15 20:36:12.852683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.488 [2024-07-15 20:36:12.854572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.488 [2024-07-15 20:36:12.854649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.488 [2024-07-15 20:36:12.854670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.488 [2024-07-15 20:36:12.854681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.854696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.854711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.488 [2024-07-15 20:36:12.854720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.488 [2024-07-15 20:36:12.854729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.488 [2024-07-15 20:36:12.854744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.488 [2024-07-15 20:36:12.862561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.488 [2024-07-15 20:36:12.862646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.488 [2024-07-15 20:36:12.862666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.488 [2024-07-15 20:36:12.862677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.488 [2024-07-15 20:36:12.862693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.488 [2024-07-15 20:36:12.862708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.862717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.862726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.489 [2024-07-15 20:36:12.862741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.489 [2024-07-15 20:36:12.864621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.489 [2024-07-15 20:36:12.864707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.864729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.489 [2024-07-15 20:36:12.864739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.864755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.864769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.864778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.864788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.489 [2024-07-15 20:36:12.864802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.489 [2024-07-15 20:36:12.872615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.489 [2024-07-15 20:36:12.872702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.872723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.489 [2024-07-15 20:36:12.872734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.872750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.872764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.872773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.872782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.489 [2024-07-15 20:36:12.872797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.489 [2024-07-15 20:36:12.874668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.489 [2024-07-15 20:36:12.874743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.874763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.489 [2024-07-15 20:36:12.874773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.874789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.874803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.874812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.874821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.489 [2024-07-15 20:36:12.874835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.489 [2024-07-15 20:36:12.882663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.489 [2024-07-15 20:36:12.882739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.882759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.489 [2024-07-15 20:36:12.882770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.882785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.882799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.882808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.882817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.489 [2024-07-15 20:36:12.882832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.489 [2024-07-15 20:36:12.884716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.489 [2024-07-15 20:36:12.884789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.884809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.489 [2024-07-15 20:36:12.884819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.884835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.884849] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.884857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.884867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.489 [2024-07-15 20:36:12.884897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.489 [2024-07-15 20:36:12.892714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.489 [2024-07-15 20:36:12.892790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.892811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.489 [2024-07-15 20:36:12.892821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.892836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.892850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.892859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.892882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.489 [2024-07-15 20:36:12.892899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.489 [2024-07-15 20:36:12.894761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.489 [2024-07-15 20:36:12.894834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.894854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.489 [2024-07-15 20:36:12.894865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.894894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.894909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.894918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.894927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.489 [2024-07-15 20:36:12.894942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.489 [2024-07-15 20:36:12.902763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.489 [2024-07-15 20:36:12.902849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.902882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.489 [2024-07-15 20:36:12.902895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.902912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.902926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.902935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.902944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.489 [2024-07-15 20:36:12.902960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.489 [2024-07-15 20:36:12.904808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.489 [2024-07-15 20:36:12.904905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.904927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.489 [2024-07-15 20:36:12.904938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.904955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.904969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.904978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.904987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.489 [2024-07-15 20:36:12.905002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.489 [2024-07-15 20:36:12.912818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.489 [2024-07-15 20:36:12.912911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.912932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.489 [2024-07-15 20:36:12.912943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.912959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.912974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.489 [2024-07-15 20:36:12.912982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.489 [2024-07-15 20:36:12.912992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.489 [2024-07-15 20:36:12.913006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.489 [2024-07-15 20:36:12.914858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.489 [2024-07-15 20:36:12.914940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.489 [2024-07-15 20:36:12.914960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.489 [2024-07-15 20:36:12.914971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.489 [2024-07-15 20:36:12.914987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.489 [2024-07-15 20:36:12.915001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.490 [2024-07-15 20:36:12.915010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.490 [2024-07-15 20:36:12.915019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.490 [2024-07-15 20:36:12.915033] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.490 [2024-07-15 20:36:12.922875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.490 [2024-07-15 20:36:12.922950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.490 [2024-07-15 20:36:12.922970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.490 [2024-07-15 20:36:12.922981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.490 [2024-07-15 20:36:12.922996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.490 [2024-07-15 20:36:12.923011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.490 [2024-07-15 20:36:12.923020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.490 [2024-07-15 20:36:12.923029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.490 [2024-07-15 20:36:12.923043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.490 [2024-07-15 20:36:12.924913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.490 [2024-07-15 20:36:12.924988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.490 [2024-07-15 20:36:12.925008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.490 [2024-07-15 20:36:12.925019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.490 [2024-07-15 20:36:12.925043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.490 [2024-07-15 20:36:12.925057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.490 [2024-07-15 20:36:12.925066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.490 [2024-07-15 20:36:12.925075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.490 [2024-07-15 20:36:12.925089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:51.490 [2024-07-15 20:36:12.932923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.490 [2024-07-15 20:36:12.933011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.490 [2024-07-15 20:36:12.933031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.490 [2024-07-15 20:36:12.933042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.490 [2024-07-15 20:36:12.933057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.490 [2024-07-15 20:36:12.933072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.490 [2024-07-15 20:36:12.933080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.490 [2024-07-15 20:36:12.933090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.490 [2024-07-15 20:36:12.933104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.490 [2024-07-15 20:36:12.934959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:51.490 [2024-07-15 20:36:12.935032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.490 [2024-07-15 20:36:12.935052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eb380 with addr=10.0.0.2, port=4420 00:19:51.490 [2024-07-15 20:36:12.935062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb380 is same with the state(5) to be set 00:19:51.490 [2024-07-15 20:36:12.935077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7eb380 (9): Bad file descriptor 00:19:51.490 [2024-07-15 20:36:12.935092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:51.490 [2024-07-15 20:36:12.935100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:51.490 [2024-07-15 20:36:12.935110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:51.490 [2024-07-15 20:36:12.935124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
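Once the updated discovery log page is processed (the "not found" / "found again" records that follow), the stale 4420 paths are removed and the reconnect attempts stop; the test then re-queries each controller expecting 4421 as the only remaining path. A sketch of that host-side check, under the same assumptions as above (scripts/rpc.py against /tmp/host.sock):

sock=/tmp/host.sock
for ctrlr in mdns0_nvme0 mdns1_nvme0; do
  # expected output in this run: 4421
  ./scripts/rpc.py -s "$sock" bdev_nvme_get_controllers -n "$ctrlr" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
done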
00:19:51.490 [2024-07-15 20:36:12.942983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:19:51.490 [2024-07-15 20:36:12.943059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:51.490 [2024-07-15 20:36:12.943079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a44b0 with addr=10.0.0.3, port=4420 00:19:51.490 [2024-07-15 20:36:12.943089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a44b0 is same with the state(5) to be set 00:19:51.490 [2024-07-15 20:36:12.943105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a44b0 (9): Bad file descriptor 00:19:51.490 [2024-07-15 20:36:12.943119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:19:51.490 [2024-07-15 20:36:12.943128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:19:51.490 [2024-07-15 20:36:12.943137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:19:51.490 [2024-07-15 20:36:12.943152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:51.490 [2024-07-15 20:36:12.944411] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:19:51.490 [2024-07-15 20:36:12.944442] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:51.490 [2024-07-15 20:36:12.944476] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:51.490 [2024-07-15 20:36:12.944514] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:19:51.490 [2024-07-15 20:36:12.944530] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:51.490 [2024-07-15 20:36:12.944544] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:51.749 [2024-07-15 20:36:13.030513] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:51.749 [2024-07-15 20:36:13.030599] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:52.380 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.665 20:36:13 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:52.665 20:36:13 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.665 
20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.665 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.666 20:36:14 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:19:52.924 [2024-07-15 20:36:14.210814] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.857 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:54.114 [2024-07-15 20:36:15.358802] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:19:54.114 2024/07/15 20:36:15 error on JSON-RPC 
call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:54.114 request: 00:19:54.114 { 00:19:54.114 "method": "bdev_nvme_start_mdns_discovery", 00:19:54.114 "params": { 00:19:54.114 "name": "mdns", 00:19:54.114 "svcname": "_nvme-disc._http", 00:19:54.114 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:54.114 } 00:19:54.114 } 00:19:54.114 Got JSON-RPC error response 00:19:54.114 GoRPCClient: error on JSON-RPC call 00:19:54.114 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:54.114 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:54.114 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:54.114 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:54.114 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:54.114 20:36:15 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:19:54.678 [2024-07-15 20:36:15.947355] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:19:54.678 [2024-07-15 20:36:16.047351] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:19:54.678 [2024-07-15 20:36:16.147373] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.678 [2024-07-15 20:36:16.147414] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:54.678 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.678 cookie is 0 00:19:54.678 is_local: 1 00:19:54.678 our_own: 0 00:19:54.678 wide_area: 0 00:19:54.678 multicast: 1 00:19:54.678 cached: 1 00:19:54.935 [2024-07-15 20:36:16.247372] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.935 [2024-07-15 20:36:16.247417] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:19:54.935 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.935 cookie is 0 00:19:54.935 is_local: 1 00:19:54.935 our_own: 0 00:19:54.935 wide_area: 0 00:19:54.935 multicast: 1 00:19:54.935 cached: 1 00:19:54.935 [2024-07-15 20:36:16.247436] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:19:54.935 [2024-07-15 20:36:16.347364] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:19:54.935 [2024-07-15 20:36:16.347406] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:54.935 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:54.935 cookie is 0 00:19:54.935 is_local: 1 00:19:54.935 our_own: 0 00:19:54.935 wide_area: 0 00:19:54.935 multicast: 1 00:19:54.935 cached: 1 00:19:55.193 [2024-07-15 20:36:16.447373] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:19:55.193 [2024-07-15 20:36:16.447417] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:19:55.193 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:19:55.193 cookie is 0 00:19:55.193 is_local: 1 00:19:55.193 our_own: 0 00:19:55.193 wide_area: 0 00:19:55.193 multicast: 1 00:19:55.193 cached: 1 00:19:55.193 [2024-07-15 20:36:16.447436] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:19:55.758 [2024-07-15 20:36:17.153537] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:55.758 [2024-07-15 20:36:17.153574] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:55.758 [2024-07-15 20:36:17.153596] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:55.758 [2024-07-15 20:36:17.239671] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:19:56.014 [2024-07-15 20:36:17.299986] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:19:56.014 [2024-07-15 20:36:17.300026] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:19:56.014 [2024-07-15 20:36:17.353411] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:56.014 [2024-07-15 20:36:17.353437] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:56.015 [2024-07-15 20:36:17.353456] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:56.015 [2024-07-15 20:36:17.439532] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:19:56.015 [2024-07-15 20:36:17.499706] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:19:56.015 [2024-07-15 20:36:17.499738] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.294 
20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.294 [2024-07-15 20:36:20.546138] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:19:59.294 2024/07/15 20:36:20 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:19:59.294 request: 00:19:59.294 { 00:19:59.294 "method": "bdev_nvme_start_mdns_discovery", 00:19:59.294 "params": { 00:19:59.294 "name": "cdc", 00:19:59.294 "svcname": "_nvme-disc._tcp", 00:19:59.294 "hostnqn": "nqn.2021-12.io.spdk:test" 00:19:59.294 } 00:19:59.294 } 00:19:59.294 Got JSON-RPC error response 00:19:59.294 GoRPCClient: error on JSON-RPC call 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.294 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 94066 00:19:59.295 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 94066 00:19:59.295 [2024-07-15 20:36:20.757069] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 94076 00:19:59.581 Got SIGTERM, quitting. 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:59.581 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:19:59.581 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:19:59.581 avahi-daemon 0.8 exiting. 
00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.581 rmmod nvme_tcp 00:19:59.581 rmmod nvme_fabrics 00:19:59.581 rmmod nvme_keyring 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 94010 ']' 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 94010 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@948 -- # '[' -z 94010 ']' 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # kill -0 94010 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # uname 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94010 00:19:59.581 killing process with pid 94010 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94010' 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@967 -- # kill 94010 00:19:59.581 20:36:20 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # wait 94010 00:19:59.839 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:59.839 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:59.839 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:59.839 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.840 00:19:59.840 real 0m19.855s 00:19:59.840 user 0m38.807s 00:19:59.840 sys 0m1.893s 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:59.840 20:36:21 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:59.840 ************************************ 00:19:59.840 END TEST nvmf_mdns_discovery 00:19:59.840 ************************************ 00:19:59.840 20:36:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 
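The negative test above turns on a single property of the mDNS discovery RPC: once a discovery service is running for _nvme-disc._tcp, a second bdev_nvme_start_mdns_discovery for the same service must fail with Code=-17 (File exists), which the harness asserts through its NOT wrapper. A minimal stand-alone sketch of that check, assuming a host app already serving RPCs on /tmp/host.sock (the socket used above) and the stock rpc.py client; the -b mdns name for the first start mirrors the instance the test stops at the end:

    # Minimal sketch, assuming an SPDK host app listening on /tmp/host.sock and
    # avahi resolving _nvme-disc._tcp records as in the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # First start succeeds and begins attaching the advertised subsystems.
    $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

    # A second start for the same service must fail with Code=-17 (File exists);
    # mdns_discovery.sh wraps this call in NOT to assert the non-zero exit code.
    if $rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test; then
        echo "unexpected: duplicate mDNS discovery start succeeded" >&2
        exit 1
    fi

    # Inspect the result the same way get_mdns_discovery_svcs / get_bdev_list do.
    $rpc -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort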
00:19:59.840 20:36:21 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:19:59.840 20:36:21 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:59.840 20:36:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:59.840 20:36:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:59.840 20:36:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:59.840 ************************************ 00:19:59.840 START TEST nvmf_host_multipath 00:19:59.840 ************************************ 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:19:59.840 * Looking for test storage... 00:19:59.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.840 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:00.099 Cannot 
find device "nvmf_tgt_br" 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.099 Cannot find device "nvmf_tgt_br2" 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:00.099 Cannot find device "nvmf_tgt_br" 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:00.099 Cannot find device "nvmf_tgt_br2" 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.099 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.099 20:36:21 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:00.099 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:20:00.357 00:20:00.357 --- 10.0.0.2 ping statistics --- 00:20:00.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.357 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.357 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.357 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:20:00.357 00:20:00.357 --- 10.0.0.3 ping statistics --- 00:20:00.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.357 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:20:00.357 00:20:00.357 --- 10.0.0.1 ping statistics --- 00:20:00.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.357 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94634 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94634 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94634 ']' 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.357 20:36:21 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:00.357 [2024-07-15 20:36:21.764419] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:20:00.357 [2024-07-15 20:36:21.764545] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.614 [2024-07-15 20:36:21.902367] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:00.614 [2024-07-15 20:36:21.963261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
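nvmf_veth_init above is what gives this NET_TYPE=virt run its topology: the target lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on its two veth ends, the host keeps 10.0.0.1 on nvmf_init_if, and a bridge ties the host-side peers together before the three pings verify reachability in both directions. A condensed sketch of those steps, using the interface names and addresses from the trace (cleanup of leftovers and error handling omitted):

    # Namespace for the target plus three veth pairs: one for the initiator,
    # two for the target's listen addresses.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side ends move into the namespace and get 10.0.0.2 / 10.0.0.3.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up, inside and outside the namespace.
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

    # Bridge the host-side peers and let NVMe/TCP traffic through.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Same reachability checks as above.
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1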
00:20:00.614 [2024-07-15 20:36:21.963309] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.614 [2024-07-15 20:36:21.963319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.614 [2024-07-15 20:36:21.963327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.614 [2024-07-15 20:36:21.963335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.614 [2024-07-15 20:36:21.963429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.614 [2024-07-15 20:36:21.963436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94634 00:20:00.614 20:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:00.870 [2024-07-15 20:36:22.345733] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.870 20:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:01.435 Malloc0 00:20:01.435 20:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:20:01.693 20:36:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:01.950 20:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.207 [2024-07-15 20:36:23.537469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.207 20:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:02.465 [2024-07-15 20:36:23.821682] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:02.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
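On the target side, the multipath setup that follows is a handful of RPCs against the nvmf_tgt just started with -m 0x3 inside the namespace: create the TCP transport, create a Malloc bdev, expose it through nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, and add two listeners on 10.0.0.2 so the host has ports 4420 and 4421 to fail over between. Condensed from the rpc_py calls in the trace (as above, rpc.py uses the target's default RPC socket here):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE from multipath.sh

    # -r enables ANA reporting, which the listener state flips below rely on.
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

    # Same subsystem, same address, two ports: the two paths the host will multipath across.
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421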
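On the host side, everything that follows repeats one pattern: attach the same subsystem through bdevperf once per port (the second attach with -x multipath, so 4421 becomes another path of Nvme0 rather than a new controller), flip the listeners' ANA states on the target, then confirm that I/O actually lands on the expected port by counting completions per path with scripts/bpf/nvmf_path.bt. A sketch of one such cycle; it assumes, as the harness appears to, that bpftrace.sh's @path[addr, port] counters end up in a trace.txt file that can be read back after a few seconds of I/O, and tgt_pid stands for the nvmf_tgt pid (94634 above):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc=/var/tmp/bdevperf.sock

    # Two controllers to the same subsystem; -x multipath turns the second into a
    # path of Nvme0 instead of a separate device.
    $rpc_py -s $bdevperf_rpc bdev_nvme_set_options -r -1
    $rpc_py -s $bdevperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $rpc_py -s $bdevperf_rpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Make 4421 the optimized path and 4420 non_optimized...
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized

    # ...then count per-path I/O for a few seconds and compare the busy port with
    # the port the target reports as optimized.
    /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh "$tgt_pid" \
        /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt > trace.txt &   # captured as trace.txt (assumption)
    bpf_pid=$!
    sleep 6
    active_port=$($rpc_py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    busy_port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    kill $bpf_pid; rm -f trace.txt
    [[ $busy_port == "$active_port" ]]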
00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94724 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94724 /var/tmp/bdevperf.sock 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 94724 ']' 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.465 20:36:23 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:20:03.837 20:36:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.837 20:36:24 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:20:03.837 20:36:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:20:03.838 20:36:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:20:04.095 Nvme0n1 00:20:04.095 20:36:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:20:04.659 Nvme0n1 00:20:04.659 20:36:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:20:04.659 20:36:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.591 20:36:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:20:05.591 20:36:26 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:05.849 20:36:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:06.415 20:36:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:20:06.415 20:36:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94812 00:20:06.415 20:36:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:06.415 20:36:27 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.976 Attaching 4 probes... 00:20:12.976 @path[10.0.0.2, 4421]: 16750 00:20:12.976 @path[10.0.0.2, 4421]: 16985 00:20:12.976 @path[10.0.0.2, 4421]: 17362 00:20:12.976 @path[10.0.0.2, 4421]: 17057 00:20:12.976 @path[10.0.0.2, 4421]: 17269 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94812 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:20:12.976 20:36:33 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:12.976 20:36:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:13.242 20:36:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:20:13.242 20:36:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94948 00:20:13.242 20:36:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:13.242 20:36:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:19.794 Attaching 4 probes... 
00:20:19.794 @path[10.0.0.2, 4420]: 16646 00:20:19.794 @path[10.0.0.2, 4420]: 17296 00:20:19.794 @path[10.0.0.2, 4420]: 17539 00:20:19.794 @path[10.0.0.2, 4420]: 16886 00:20:19.794 @path[10.0.0.2, 4420]: 16388 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94948 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:20:19.794 20:36:40 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:19.794 20:36:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:20.052 20:36:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:20:20.052 20:36:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95080 00:20:20.052 20:36:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:20.052 20:36:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.608 Attaching 4 probes... 
00:20:26.608 @path[10.0.0.2, 4421]: 13667 00:20:26.608 @path[10.0.0.2, 4421]: 16769 00:20:26.608 @path[10.0.0.2, 4421]: 17143 00:20:26.608 @path[10.0.0.2, 4421]: 17266 00:20:26.608 @path[10.0.0.2, 4421]: 16993 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95080 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:20:26.608 20:36:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:20:26.608 20:36:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:20:27.176 20:36:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:20:27.176 20:36:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:27.176 20:36:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95216 00:20:27.176 20:36:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:33.728 Attaching 4 probes... 
00:20:33.728 00:20:33.728 00:20:33.728 00:20:33.728 00:20:33.728 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95216 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:20:33.728 20:36:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:20:33.728 20:36:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:33.987 20:36:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:20:33.987 20:36:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95347 00:20:33.987 20:36:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:33.987 20:36:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:40.545 Attaching 4 probes... 
00:20:40.545 @path[10.0.0.2, 4421]: 16520 00:20:40.545 @path[10.0.0.2, 4421]: 16923 00:20:40.545 @path[10.0.0.2, 4421]: 16999 00:20:40.545 @path[10.0.0.2, 4421]: 16306 00:20:40.545 @path[10.0.0.2, 4421]: 16533 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:40.545 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95347 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:40.546 [2024-07-15 20:37:01.873381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873434] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873455] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873463] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873472] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873490] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873498] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873515] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 
00:20:40.546 [2024-07-15 20:37:01.873548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873556] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873564] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873678] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873721] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873729] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873738] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873746] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 [2024-07-15 20:37:01.873754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2289420 is same with the state(5) to be set 00:20:40.546 20:37:01 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:20:41.503 20:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:20:41.503 20:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95477 00:20:41.503 20:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:41.503 20:37:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:20:48.056 20:37:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:20:48.056 20:37:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:48.057 Attaching 4 probes... 
00:20:48.057 @path[10.0.0.2, 4420]: 14797 00:20:48.057 @path[10.0.0.2, 4420]: 16414 00:20:48.057 @path[10.0.0.2, 4420]: 15323 00:20:48.057 @path[10.0.0.2, 4420]: 16773 00:20:48.057 @path[10.0.0.2, 4420]: 16459 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95477 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:48.057 [2024-07-15 20:37:09.442801] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:48.057 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:20:48.314 20:37:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:20:54.868 20:37:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:20:54.868 20:37:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95677 00:20:54.868 20:37:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94634 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:20:54.868 20:37:15 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:01.454 20:37:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:01.454 20:37:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:01.454 Attaching 4 probes... 
00:21:01.454 @path[10.0.0.2, 4421]: 16443 00:21:01.454 @path[10.0.0.2, 4421]: 16860 00:21:01.454 @path[10.0.0.2, 4421]: 16951 00:21:01.454 @path[10.0.0.2, 4421]: 16498 00:21:01.454 @path[10.0.0.2, 4421]: 16685 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95677 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94724 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94724 ']' 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94724 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94724 00:21:01.454 killing process with pid 94724 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94724' 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94724 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94724 00:21:01.454 Connection closed with partial response: 00:21:01.454 00:21:01.454 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94724 00:21:01.454 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.454 [2024-07-15 20:36:23.906449] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:01.454 [2024-07-15 20:36:23.906564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94724 ] 00:21:01.454 [2024-07-15 20:36:24.043106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.454 [2024-07-15 20:36:24.126568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.454 Running I/O for 90 seconds... 
00:21:01.454 [2024-07-15 20:36:34.468827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.454 [2024-07-15 20:36:34.468932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.454 [2024-07-15 20:36:34.471482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.454 [2024-07-15 20:36:34.471528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.454 [2024-07-15 20:36:34.471563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.454 [2024-07-15 20:36:34.471582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.454 [2024-07-15 20:36:34.471605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.454 [2024-07-15 20:36:34.471621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.454 [2024-07-15 20:36:34.471643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.454 [2024-07-15 20:36:34.471659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.454 [2024-07-15 20:36:34.471681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.454 [2024-07-15 20:36:34.471696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.471982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.471998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.472674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.472729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.455 [2024-07-15 20:36:34.472748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.473729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.473761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.473791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.473808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.473831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.473848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.473886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.473905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.473928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.473945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.473980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.473998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.455 [2024-07-15 20:36:34.474416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.455 [2024-07-15 20:36:34.474437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:21:01.456 [2024-07-15 20:36:34.474921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.474968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.474984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:100 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.456 [2024-07-15 20:36:34.475496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.475746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.475766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.456 [2024-07-15 20:36:34.476859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.456 [2024-07-15 20:36:34.476900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 
[2024-07-15 20:36:34.476917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.476939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.476955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.476978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.476993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56104 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:91 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.477960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.477986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478151] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.457 [2024-07-15 20:36:34.478629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.457 [2024-07-15 20:36:34.478652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.478964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.478991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.479589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.479607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.480623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.480670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.480728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.480782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.458 [2024-07-15 20:36:34.480858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.480948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.480982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.480999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.481026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.481056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.481083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.481102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.481139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.481160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.458 [2024-07-15 20:36:34.481183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.458 [2024-07-15 20:36:34.481216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.481965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.481987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:21:01.459 [2024-07-15 20:36:34.482267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.459 [2024-07-15 20:36:34.482822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.459 [2024-07-15 20:36:34.482978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.459 [2024-07-15 20:36:34.482993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.483016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.483039] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.483064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.483080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.483787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.483827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.483861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.483913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.483941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.483970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 
20:36:34.484286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56104 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.484958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.484984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:13 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.485970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.485992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486271] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.460 [2024-07-15 20:36:34.486309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.460 [2024-07-15 20:36:34.486325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.486641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 
m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.486664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.500977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.500992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.501400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.501416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.461 [2024-07-15 20:36:34.502654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.461 [2024-07-15 20:36:34.502788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.461 [2024-07-15 20:36:34.502804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.502826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.502842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.502864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.502896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.502920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.502946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.502969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.502986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:21:01.462 [2024-07-15 20:36:34.503853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.503968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.503990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.462 [2024-07-15 20:36:34.504263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.462 [2024-07-15 20:36:34.504279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.463 [2024-07-15 20:36:34.504434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.504609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.504625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 
[2024-07-15 20:36:34.505741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:56048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.505963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.505992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:56064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:56072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:56104 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:94 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.463 [2024-07-15 20:36:34.506734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.463 [2024-07-15 20:36:34.506756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.506779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.506818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.506846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.506902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.506933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.506969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.506997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.507920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 
dnr:0 00:21:01.464 [2024-07-15 20:36:34.507962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.507978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:55344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:55376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:55408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.508905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.508926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.509783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.509812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.509840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.464 [2024-07-15 20:36:34.509857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.464 [2024-07-15 20:36:34.509909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.509931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.509954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.509971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.509994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.465 [2024-07-15 20:36:34.510086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.510972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.510993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:21:01.465 [2024-07-15 20:36:34.511283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.465 [2024-07-15 20:36:34.511585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.465 [2024-07-15 20:36:34.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.466 [2024-07-15 20:36:34.511857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.511962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.511992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.512009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.512186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.512215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.512955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.512986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:55960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:55968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:56008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 20:36:34.513374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.466 [2024-07-15 20:36:34.513395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.466 [2024-07-15 
20:36:34.513411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:21:01.466 - 00:21:01.471 [... long run of repeated nvme_qpair.c *NOTICE* command/completion pairs elided: WRITE and READ I/O on sqid:1 (lba ~55328-56344 at 2024-07-15 20:36:34, lba ~106528-107232 at 2024-07-15 20:36:41), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:21:01.471 [2024-07-15 20:36:41.095266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.471 [2024-07-15 20:36:41.095281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.471 [2024-07-15 20:36:41.095672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.471 [2024-07-15 20:36:41.095694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.095955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.095977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.096000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.096023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.096038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.096061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.472 [2024-07-15 20:36:41.096076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.096954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.096983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.472 [2024-07-15 20:36:41.097719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.097757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.097801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.097841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.097893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.097934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.097971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.097993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.098008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.098031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.098046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.098068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.098083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 
m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.098106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.098121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.098143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.098159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.098181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.472 [2024-07-15 20:36:41.098196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.472 [2024-07-15 20:36:41.098218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.098505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.098979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 
20:36:41.099329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106832 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.099979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.099995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.473 [2024-07-15 20:36:41.100290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.473 [2024-07-15 20:36:41.100306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 
p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100855] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.100968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.100984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 
[2024-07-15 20:36:41.101256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.101473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.101488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.474 [2024-07-15 20:36:41.102184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.474 [2024-07-15 20:36:41.102609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.474 [2024-07-15 20:36:41.102625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.102647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.102662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.119750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.119809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.119846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.119885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.119944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.119968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.119999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:21:01.475 [2024-07-15 20:36:41.120364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.120958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.120989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.475 [2024-07-15 20:36:41.121497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.475 [2024-07-15 20:36:41.121549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.475 [2024-07-15 20:36:41.121601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.475 [2024-07-15 20:36:41.121654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.475 [2024-07-15 20:36:41.121685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.475 [2024-07-15 20:36:41.121706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.121737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.121758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.121789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.121810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.121841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.121862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.121908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.121930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.121972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.121994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.122494] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.122515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.123747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.123790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.123846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.123890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.123926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.123948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.123980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.476 
[2024-07-15 20:36:41.124294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.124973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.124995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.125026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.125047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.125078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.125099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.125130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.125151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.125181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.476 [2024-07-15 20:36:41.125213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.476 [2024-07-15 20:36:41.125246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125371] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106984 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.125948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.125969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.126953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0056 
p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.126985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.127006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.127036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.127057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.127087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.127109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.127139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.127161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.127201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.127223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.127255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.127276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.128185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.128223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.128265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.477 [2024-07-15 20:36:41.128288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.128320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.477 [2024-07-15 20:36:41.128342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.128372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.477 [2024-07-15 20:36:41.128394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.477 [2024-07-15 20:36:41.128425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 
20:36:41.128954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.128976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.128991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107384 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.129981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.129996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.130019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.130034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.130056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.130071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:21:01.478 [2024-07-15 20:36:41.130095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.130112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.130134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.478 [2024-07-15 20:36:41.130150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.478 [2024-07-15 20:36:41.130171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.478 [2024-07-15 20:36:41.130187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.130832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.130847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.131981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106744 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.131997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.479 [2024-07-15 20:36:41.132328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.479 [2024-07-15 20:36:41.132343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 
p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.132983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.132999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.133021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.133037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.145959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146197] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.146931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.146980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 
[2024-07-15 20:36:41.147011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.480 [2024-07-15 20:36:41.147926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.480 [2024-07-15 20:36:41.147964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.148036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.481 [2024-07-15 20:36:41.148078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.148125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.481 [2024-07-15 20:36:41.148155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.148201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.481 [2024-07-15 20:36:41.148232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.148832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.481 [2024-07-15 20:36:41.148907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.481 [2024-07-15 20:36:41.149052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.481 [2024-07-15 20:36:41.149135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.149959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.149989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:21:01.481 [2024-07-15 20:36:41.150119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.150957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.150986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.151966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.151995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.152046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.152075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.481 [2024-07-15 20:36:41.152125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.481 [2024-07-15 20:36:41.152153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:41.152769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.152822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.152874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.152943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.152975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.152994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:106568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:106584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:106592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:106616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:106624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:41.153666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 
20:36:41.153933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:41.153963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.482 [2024-07-15 20:36:48.402755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 
cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.402793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.402849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.402940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.402977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.402996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.403019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.403034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.403061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.403113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.403152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.403183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.482 [2024-07-15 20:36:48.403232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.482 [2024-07-15 20:36:48.403264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.403964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.403979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 
[2024-07-15 20:36:48.404532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.404567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.404584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9624 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.483 [2024-07-15 20:36:48.405959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.483 [2024-07-15 20:36:48.405984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.484 [2024-07-15 20:36:48.406053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 
nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.406938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.406968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
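
Each failed I/O in this stretch of the log is reported as a pair of NOTICE lines: nvme_io_qpair_print_command prints the submitted READ or WRITE (sqid, cid, nsid, lba, len), and spdk_nvme_print_completion prints the matching completion, where "(03/02)" is NVMe status code type 3h (Path Related Status) with status code 02h (Asymmetric Access Inaccessible), meaning the ANA state of this path currently makes the namespace unreachable; dnr:0 indicates the controller did not set Do Not Retry. What follows is only a rough sketch, not part of the test suite: a hypothetical Python helper (decode_completion and its lookup tables are made-up names) that decodes one such completion line, assuming the field layout exactly as printed above.

# Hypothetical helper, not part of the SPDK test suite: decode one of the
# spdk_nvme_print_completion NOTICE lines shown above. The field layout is
# assumed from the log format itself.
import re

# Status code type 0x3 is "Path Related Status" in the NVMe base spec;
# within it, status code 0x02 is "Asymmetric Access Inaccessible".
SCT_NAMES = {0x0: "Generic", 0x1: "Command Specific",
             0x2: "Media/Data Integrity", 0x3: "Path Related"}
PATH_SC_NAMES = {0x00: "Internal Path Error",
                 0x01: "Asymmetric Access Persistent Loss",
                 0x02: "Asymmetric Access Inaccessible",
                 0x03: "Asymmetric Access Transition"}

COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) cdw0:(?P<cdw0>\w+) "
    r"sqhd:(?P<sqhd>[0-9a-f]+) p:(?P<p>\d) m:(?P<m>\d) dnr:(?P<dnr>\d)")

def decode_completion(line: str) -> dict:
    """Return the parsed fields of a completion NOTICE line, or {} if no match."""
    m = COMPLETION_RE.search(line)
    if not m:
        return {}
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)
    return {
        "qid": int(m["qid"]),
        "cid": int(m["cid"]),
        "sqhd": int(m["sqhd"], 16),
        "status_type": SCT_NAMES.get(sct, hex(sct)),
        "status": PATH_SC_NAMES.get(sc, hex(sc)) if sct == 0x3 else hex(sc),
        "do_not_retry": m["dnr"] == "1",
    }

print(decode_completion(
    "ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 "
    "cdw0:0 sqhd:0061 p:0 m:0 dnr:0"))

Run against the completion printed just above (cid:80), this reports a Path Related / Asymmetric Access Inaccessible status with do_not_retry False, which is consistent with the host retrying the I/O once the ANA state changes.
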
00:21:01.484 [2024-07-15 20:36:48.407494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.407863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.407945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.484 [2024-07-15 20:36:48.408727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.484 [2024-07-15 20:36:48.408744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.408765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.408781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.408802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.408818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.408840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.408865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.408926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.408956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.408986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.485 [2024-07-15 20:36:48.409431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.409952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.409977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.410003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.410043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.410073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.410104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.410121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.410143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.410159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.485 [2024-07-15 20:36:48.411213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.485 [2024-07-15 20:36:48.411252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.485 [2024-07-15 20:36:48.411302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.485 [2024-07-15 20:36:48.411396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.485 [2024-07-15 20:36:48.411466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.485 [2024-07-15 20:36:48.411526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.411806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.411838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.412689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.412782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.412830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.412864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.485 [2024-07-15 20:36:48.412947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.485 [2024-07-15 20:36:48.413008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:21:01.486 [2024-07-15 20:36:48.413113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:108 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.413937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.413971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.414936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.414965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.415676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.415721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.415770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.415802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.415853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.415906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.415934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.415951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.415973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.415988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.416031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.416094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.416183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.416251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.486 [2024-07-15 20:36:48.416311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.416350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.486 [2024-07-15 20:36:48.416372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.486 [2024-07-15 20:36:48.416388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.487 [2024-07-15 20:36:48.416435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.487 [2024-07-15 20:36:48.416503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.487 [2024-07-15 20:36:48.416553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8800 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.416956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.416984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417398] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.417952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.417986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
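
Because these command/completion pairs repeat for every queued I/O while the path stays inaccessible, the flood is easier to digest as a summary than line by line. The sketch below is likewise hypothetical and not part of the test harness: a small Python post-processor that tallies the printed commands in a saved console log by opcode and submission queue id, assuming the nvme_io_qpair_print_command format shown above (a wrapped console line can hold several entries, so every match on a line is counted).

# Hypothetical post-processing sketch: count the failed I/O commands in a
# saved console log by opcode (READ/WRITE) and submission queue id.
# Assumes the nvme_io_qpair_print_command format shown in this log.
import re
import sys
from collections import Counter

CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)")

def summarize(log_path: str) -> Counter:
    counts = Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            # A wrapped console line may hold several entries; scan for all of them.
            for m in CMD_RE.finditer(line):
                counts[(m["op"], int(m["sqid"]))] += 1
    return counts

if __name__ == "__main__":
    for (op, sqid), n in sorted(summarize(sys.argv[1]).items()):
        print(f"{op:5s} sqid:{sqid}  {n} commands printed")
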
00:21:01.487 [2024-07-15 20:36:48.418160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.487 [2024-07-15 20:36:48.418665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.487 [2024-07-15 20:36:48.418698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.418715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.418737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.418752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.418780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.418809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.418848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.418895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.418922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.418939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.418962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.418977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.418999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.488 [2024-07-15 20:36:48.419827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.419958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.419991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9272 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.420432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.420476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.421437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.421501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.488 [2024-07-15 20:36:48.421549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.488 [2024-07-15 20:36:48.421613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.488 [2024-07-15 20:36:48.421683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.488 [2024-07-15 20:36:48.421747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.488 [2024-07-15 20:36:48.421786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421808] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.488 [2024-07-15 20:36:48.421826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.488 [2024-07-15 20:36:48.421916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.488 [2024-07-15 20:36:48.421953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.421973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.489 [2024-07-15 20:36:48.422333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 
20:36:48.422358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.422938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.422989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0026 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.423937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.423985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.424426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.424454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.425111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.425142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.425172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.425189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.425219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.425265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.425309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.425333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.425356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.489 [2024-07-15 20:36:48.425372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.489 [2024-07-15 20:36:48.425394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 
20:36:48.425705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.425933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.425964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.490 [2024-07-15 20:36:48.426030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:57 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.426951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.426991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.490 [2024-07-15 20:36:48.427835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.490 [2024-07-15 20:36:48.427887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.427922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.427939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.427961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.427977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 
00:21:01.491 [2024-07-15 20:36:48.428010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.428968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.428999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:01.491 [2024-07-15 20:36:48.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.429830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.429848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.430849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.430915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.430964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.430986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.431011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.491 [2024-07-15 20:36:48.431027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.431049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.491 [2024-07-15 20:36:48.431072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.431112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.491 [2024-07-15 20:36:48.431142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.431178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.491 [2024-07-15 20:36:48.431197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.431222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.491 [2024-07-15 20:36:48.431252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.491 [2024-07-15 20:36:48.431293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.431343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9712 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.431408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.492 [2024-07-15 20:36:48.431913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431936] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.431952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.431973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.431999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432451] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.432944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.432969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:21:01.492 [2024-07-15 20:36:48.433078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.492 [2024-07-15 20:36:48.433413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.492 [2024-07-15 20:36:48.433432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.433470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.433500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:76 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.434966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.434994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.435031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.435059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.435097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.435129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.435169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.493 [2024-07-15 20:36:48.435198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.435238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.435265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.435301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.435319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.435341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.493 [2024-07-15 20:36:48.443883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.493 [2024-07-15 20:36:48.443929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.443959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.443988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 
nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.444954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.444980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.494 
[2024-07-15 20:36:48.445796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.445951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.445981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.494 [2024-07-15 20:36:48.446342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.494 [2024-07-15 20:36:48.446373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.446941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.446965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.447963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.448262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.448326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.448367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.448425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.448487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.448555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.448947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.448984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.449002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.449051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.495 [2024-07-15 20:36:48.449089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.495 [2024-07-15 20:36:48.449131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9392 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.495 [2024-07-15 20:36:48.449842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.495 [2024-07-15 20:36:48.449859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.449897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.449914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.449936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.449952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.449985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.450628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.450644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451547] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.451973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.451997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.496 
[2024-07-15 20:36:48.452158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.496 [2024-07-15 20:36:48.452430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.496 [2024-07-15 20:36:48.452497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.496 [2024-07-15 20:36:48.452521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 
cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.452955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.452977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.453966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.453988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.497 [2024-07-15 20:36:48.454973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.497 [2024-07-15 20:36:48.454995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:111 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.455982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.455998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.456034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.456062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.456117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.456148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:21:01.498 [2024-07-15 20:36:48.457240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.457537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.457585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.457643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.457712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.457783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.457852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.457972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.457994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.498 [2024-07-15 20:36:48.458396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.458433] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.498 [2024-07-15 20:36:48.458478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:01.498 [2024-07-15 20:36:48.458516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.458954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.458981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459022] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:01.499 [2024-07-15 20:36:48.459588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.459855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.459906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9552 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.460959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.460988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.461032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.461061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.461119] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.461157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.461195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.461212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.461238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.461254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.461279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.499 [2024-07-15 20:36:48.461295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:01.499 [2024-07-15 20:36:48.461334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.500 [2024-07-15 20:36:48.461363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 
20:36:48.461687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.461948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.461988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 
cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.462966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.462982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:01.500 [2024-07-15 20:36:48.463395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.500 [2024-07-15 20:36:48.463412] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.463945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.463976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:01.501 [2024-07-15 20:36:48.464068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.464972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.464989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.465015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.465032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.465073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.465103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.465150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.465172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:36:48.465431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:36:48.465458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874577] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:44744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.874979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.501 [2024-07-15 20:37:01.874992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.501 [2024-07-15 20:37:01.875008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.502 [2024-07-15 20:37:01.875510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875857] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.875985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.875999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.502 [2024-07-15 20:37:01.876336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.502 [2024-07-15 20:37:01.876354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:01.503 [2024-07-15 20:37:01.876504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876837] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.876971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.876985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877460] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.503 [2024-07-15 20:37:01.877676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.503 [2024-07-15 20:37:01.877690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45408 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.877982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.877998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.878013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:01.504 [2024-07-15 20:37:01.878042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878410] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:01.504 [2024-07-15 20:37:01.878529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:01.504 [2024-07-15 20:37:01.878577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:01.504 [2024-07-15 20:37:01.878589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45728 len:8 PRP1 0x0 PRP2 0x0 00:21:01.504 [2024-07-15 20:37:01.878603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878651] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1023500 was disconnected and freed. reset controller. 
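Nearly every record above is the same event repeated once per outstanding command: an in-flight READ or WRITE was completed with ABORTED - SQ DELETION (00/08) as its submission queue was torn down, after which the qpair was freed and the controller reset. When digging through a capture like this, a per-opcode tally is usually more informative than reading the dump line by line; a minimal sketch, assuming the console output has been saved to a file (build.log is a placeholder name):

    # Tally nvme_io_qpair_print_command records by opcode (here, the aborted READs and WRITEs)
    grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]\+' build.log | sort | uniq -c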
00:21:01.504 [2024-07-15 20:37:01.878744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.504 [2024-07-15 20:37:01.878769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.504 [2024-07-15 20:37:01.878799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.504 [2024-07-15 20:37:01.878827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:01.504 [2024-07-15 20:37:01.878883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:01.504 [2024-07-15 20:37:01.878900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef4d0 is same with the state(5) to be set 00:21:01.504 [2024-07-15 20:37:01.880207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.504 [2024-07-15 20:37:01.880250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ef4d0 (9): Bad file descriptor 00:21:01.504 [2024-07-15 20:37:01.880365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:01.504 [2024-07-15 20:37:01.880397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ef4d0 with addr=10.0.0.2, port=4421 00:21:01.504 [2024-07-15 20:37:01.880414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ef4d0 is same with the state(5) to be set 00:21:01.504 [2024-07-15 20:37:01.880675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ef4d0 (9): Bad file descriptor 00:21:01.504 [2024-07-15 20:37:01.880756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.505 [2024-07-15 20:37:01.880779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:01.505 [2024-07-15 20:37:01.880795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:01.505 [2024-07-15 20:37:01.880822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:01.505 [2024-07-15 20:37:01.880837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:01.505 [2024-07-15 20:37:11.976774] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
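The block above is consistent with the failover the multipath test exercises: queued I/O is completed with ABORTED - SQ DELETION, the qpair is dropped, the first reconnect to 10.0.0.2:4421 is refused (errno 111 is ECONNREFUSED), and the controller reset only succeeds about ten seconds later. A rough way to provoke the same kind of path flap by hand against such a target, using only RPCs that appear elsewhere in this log (a hypothetical sketch, not the multipath.sh logic itself; rpc.py stands for scripts/rpc.py as invoked throughout the trace):

    # Hypothetical path flap: drop the secondary listener, wait, then restore it.
    # An attached host should log aborted I/O followed by a successful reset, as above.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421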
00:21:01.505 Received shutdown signal, test time was about 55.978259 seconds
00:21:01.505
00:21:01.505 Latency(us)
00:21:01.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.505 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:01.505 Verification LBA range: start 0x0 length 0x4000
00:21:01.505 Nvme0n1 : 55.98 7157.40 27.96 0.00 0.00 17848.89 1891.61 7076934.75
00:21:01.505 ===================================================================================================================
00:21:01.505 Total : 7157.40 27.96 0.00 0.00 17848.89 1891.61 7076934.75
00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.505 rmmod nvme_tcp 00:21:01.505 rmmod nvme_fabrics 00:21:01.505 rmmod nvme_keyring 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94634 ']' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94634 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 94634 ']' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 94634 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 94634 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 94634' 00:21:01.505 killing process with pid 94634 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 94634 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 94634 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:01.505 00:21:01.505 real 1m1.679s 00:21:01.505 user 2m56.009s 00:21:01.505 sys 0m13.461s 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.505 20:37:22 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:01.505 ************************************ 00:21:01.505 END TEST nvmf_host_multipath 00:21:01.505 ************************************ 00:21:01.765 20:37:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:01.765 20:37:22 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:01.765 20:37:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:01.765 20:37:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.765 20:37:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:01.765 ************************************ 00:21:01.765 START TEST nvmf_timeout 00:21:01.765 ************************************ 00:21:01.765 20:37:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:21:01.765 * Looking for test storage... 
00:21:01.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.765 
20:37:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.765 20:37:23 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:01.765 Cannot find device "nvmf_tgt_br" 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.765 Cannot find device "nvmf_tgt_br2" 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:01.765 Cannot find device "nvmf_tgt_br" 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:21:01.765 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:01.765 Cannot find device "nvmf_tgt_br2" 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:01.766 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:01.766 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:01.766 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:02.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:21:02.023 00:21:02.023 --- 10.0.0.2 ping statistics --- 00:21:02.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.023 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:02.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:02.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:21:02.023 00:21:02.023 --- 10.0.0.3 ping statistics --- 00:21:02.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.023 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:02.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:21:02.023 00:21:02.023 --- 10.0.0.1 ping statistics --- 00:21:02.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.023 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95989 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95989 00:21:02.023 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 95989 ']' 00:21:02.024 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.024 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.024 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.024 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.024 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.024 [2024-07-15 20:37:23.468516] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:21:02.024 [2024-07-15 20:37:23.468652] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.282 [2024-07-15 20:37:23.610196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:02.282 [2024-07-15 20:37:23.680677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.282 [2024-07-15 20:37:23.680991] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.282 [2024-07-15 20:37:23.681166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.282 [2024-07-15 20:37:23.681346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.282 [2024-07-15 20:37:23.681362] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.282 [2024-07-15 20:37:23.681501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.282 [2024-07-15 20:37:23.681539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.282 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.282 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:02.282 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.282 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.282 20:37:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:02.541 20:37:23 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.541 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:02.541 20:37:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:02.541 [2024-07-15 20:37:24.025077] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.798 20:37:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:02.798 Malloc0 00:21:03.056 20:37:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.314 20:37:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:03.571 20:37:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.829 [2024-07-15 20:37:25.087570] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96068 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- 
host/timeout.sh@34 -- # waitforlisten 96068 /var/tmp/bdevperf.sock 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96068 ']' 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.829 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:03.829 [2024-07-15 20:37:25.161199] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:03.829 [2024-07-15 20:37:25.161299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96068 ] 00:21:03.829 [2024-07-15 20:37:25.298070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.088 [2024-07-15 20:37:25.366133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.088 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.088 20:37:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:21:04.088 20:37:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:04.346 20:37:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:21:04.604 NVMe0n1 00:21:04.604 20:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96101 00:21:04.604 20:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:04.604 20:37:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:21:04.862 Running I/O for 10 seconds... 
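Taken together, the trace above is a compact recipe for the timeout test bed: a TCP transport and a Malloc-backed subsystem on the target, plus a bdevperf host attached with a deliberately short controller-loss timeout. Condensed to the commands that actually appear in the log (rpc.py, bdevperf and bdevperf.py abbreviate the full spdk_repo paths used above; the target-side calls go to the default RPC socket):

    # Target side: transport, backing bdev, subsystem, namespace, listener
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf idles on its own RPC socket (the trace waits for it via
    # waitforlisten), the controller is attached with a 5 s ctrlr-loss timeout and
    # 2 s reconnect delay, then the verify workload is kicked off
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Those two knobs are what the rest of the run exercises: the 4420 listener is removed while the verify workload is in flight (first command of the next block), leaving the host to ride out the outage within the 5 s / 2 s bounds.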
00:21:05.798 20:37:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.059 [2024-07-15 20:37:27.311127] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311184] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311196] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311205] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311213] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311222] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311239] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311256] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311265] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311273] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311281] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311290] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311311] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311320] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311328] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311336] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311345] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311353] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311361] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311372] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311381] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311397] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311405] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311413] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311421] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311430] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311438] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311446] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311454] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311462] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311480] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311488] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311496] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311504] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311512] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311528] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the 
state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311536] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311545] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311553] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311561] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311569] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311590] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311607] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311615] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311623] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311632] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311640] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311648] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311656] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311680] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311688] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311697] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.311705] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b6900 is same with the state(5) to be set 00:21:06.059 [2024-07-15 20:37:27.313447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.059 [2024-07-15 20:37:27.313490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.059 [2024-07-15 20:37:27.313505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.059 [2024-07-15 20:37:27.313515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.059 [2024-07-15 20:37:27.313536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.059 [2024-07-15 20:37:27.313546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.059 [2024-07-15 20:37:27.313557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:06.060 [2024-07-15 20:37:27.313566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.313577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fe240 is same with the state(5) to be set 00:21:06.060 [2024-07-15 20:37:27.313624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.313640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.313659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.313669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.313682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.313691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.313703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:75592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.313713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.313725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:75600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.313734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.313746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:75608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.313757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:75624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:75648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:75656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:75688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 
[2024-07-15 20:37:27.314429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:75712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:75728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.314798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.314808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:75752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315356] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:75776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.315821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.315838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.316660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.316790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.317061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.060 [2024-07-15 20:37:27.317342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.060 [2024-07-15 20:37:27.317360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:75936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.317967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.317979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.318500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.318911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:06.061 [2024-07-15 20:37:27.319047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.319306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.319332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.319355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.319377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.319399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.319551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.319832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320204] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.320752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.320895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.321720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.321857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.061 [2024-07-15 20:37:27.322118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.061 [2024-07-15 20:37:27.322134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:06.062 [2024-07-15 20:37:27.322703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.322716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:76336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:76376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 
20:37:27.323833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.323848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.323861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.324763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.324802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.324947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.325925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.325937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:06.062 [2024-07-15 20:37:27.326586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:06.062 [2024-07-15 20:37:27.326831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o
00:21:06.063 [2024-07-15 20:37:27.326855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:06.063 [2024-07-15 20:37:27.326879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76584 len:8 PRP1 0x0 PRP2 0x0
00:21:06.063 [2024-07-15 20:37:27.326899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:06.063 [2024-07-15 20:37:27.326951] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x236b8d0 was disconnected and freed. reset controller.
00:21:06.063 [2024-07-15 20:37:27.327222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe240 (9): Bad file descriptor
00:21:06.063 [2024-07-15 20:37:27.327713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:06.063 [2024-07-15 20:37:27.328030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:06.063 [2024-07-15 20:37:27.328071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fe240 with addr=10.0.0.2, port=4420
00:21:06.063 [2024-07-15 20:37:27.328085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fe240 is same with the state(5) to be set
00:21:06.063 [2024-07-15 20:37:27.328108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe240 (9): Bad file descriptor
00:21:06.063 [2024-07-15 20:37:27.328126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:06.063 [2024-07-15 20:37:27.328136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:06.063 [2024-07-15 20:37:27.328441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:06.063 [2024-07-15 20:37:27.328472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:06.063 [2024-07-15 20:37:27.328485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:06.063 20:37:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:21:07.969 [2024-07-15 20:37:29.328641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:07.969 [2024-07-15 20:37:29.328722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fe240 with addr=10.0.0.2, port=4420
00:21:07.969 [2024-07-15 20:37:29.328741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fe240 is same with the state(5) to be set
00:21:07.969 [2024-07-15 20:37:29.328770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe240 (9): Bad file descriptor
00:21:07.969 [2024-07-15 20:37:29.328790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:07.969 [2024-07-15 20:37:29.328801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:07.969 [2024-07-15 20:37:29.328814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:07.969 [2024-07-15 20:37:29.328843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
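The errno = 111 reported by posix_sock_create above is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420, presumably because the timeout test has taken the target's listener down, so every bdev_nvme reconnect attempt fails and the controller stays in the failed state until the next retry. A quick manual probe of the same port (not part of timeout.sh; address and port copied from the log) might look like:

# Hypothetical helper, not part of the test scripts: probe the NVMe/TCP listener
# that the initiator above is failing to reach.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 is accepting connections"
else
    echo "10.0.0.2:4420 refused or unreachable (consistent with errno = 111 above)"
fi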
00:21:07.969 [2024-07-15 20:37:29.328856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:07.969 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:21:07.969 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:07.969 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:08.227 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:21:08.227 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:21:08.227 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:08.227 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:08.485 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:21:08.485 20:37:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:21:09.859 [2024-07-15 20:37:31.329026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:09.859 [2024-07-15 20:37:31.329105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22fe240 with addr=10.0.0.2, port=4420
00:21:09.859 [2024-07-15 20:37:31.329123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22fe240 is same with the state(5) to be set
00:21:09.859 [2024-07-15 20:37:31.329152] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fe240 (9): Bad file descriptor
00:21:09.859 [2024-07-15 20:37:31.329173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:09.859 [2024-07-15 20:37:31.329184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:09.859 [2024-07-15 20:37:31.329196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:09.859 [2024-07-15 20:37:31.329227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:09.859 [2024-07-15 20:37:31.329240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:12.380 [2024-07-15 20:37:33.329274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:12.380 [2024-07-15 20:37:33.329349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:12.380 [2024-07-15 20:37:33.329364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:12.380 [2024-07-15 20:37:33.329375] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:21:12.380 [2024-07-15 20:37:33.329407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
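The get_controller and get_bdev helpers traced above (host/timeout.sh@41 and @37) simply query bdevperf's RPC socket and compare the returned names; at this point the controller and bdev are expected to still be registered even though every reconnect attempt is failing. Restated as a standalone sketch, with the socket path, RPC names and expected values copied from the log (the exact helper bodies in timeout.sh may differ):

# Sketch of the checks traced above; values come from the log, the helper shape
# is illustrative rather than the literal timeout.sh code.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ctrlr=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
bdev=$("$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
[[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]] && echo "controller and bdev still registered"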
00:21:12.944
00:21:12.944 Latency(us)
00:21:12.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:12.944 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:12.944 Verification LBA range: start 0x0 length 0x4000
00:21:12.944 NVMe0n1 : 8.11 1164.98 4.55 15.79 0.00 108468.75 2219.29 7046430.72
00:21:12.944 ===================================================================================================================
00:21:12.944 Total : 1164.98 4.55 15.79 0.00 108468.75 2219.29 7046430.72
00:21:12.944 0
00:21:13.508 20:37:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:21:13.508 20:37:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:13.508 20:37:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:21:13.773 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:21:13.773 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:21:13.773 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:21:13.773 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 96101
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96068
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96068 ']'
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96068
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:14.039 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96068
00:21:14.297 killing process with pid 96068
00:21:14.297 Received shutdown signal, test time was about 9.319913 seconds
00:21:14.297
00:21:14.297 Latency(us)
00:21:14.297 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.297 ===================================================================================================================
00:21:14.297 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:14.297 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:14.297 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:14.297 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96068'
00:21:14.297 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96068
00:21:14.297 20:37:35 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96068
00:21:14.297 20:37:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:14.554 [2024-07-15 20:37:36.023912] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:14.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
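Once the first bdevperf run is torn down (the all-zero latency table appears to be the summary bdevperf prints on shutdown), killprocess from autotest_common.sh stops pid 96068 and the test re-adds the TCP listener before launching the next case. Condensed into a sketch, with the pid, NQN and address copied from the log (the real killprocess helper carries more error handling):

# Condensed teardown/re-listen sequence corresponding to the trace above.
pid=96068
if kill -0 "$pid" 2>/dev/null && [[ $(ps --no-headers -o comm= "$pid") != sudo ]]; then
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
fi
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420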
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96259
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96259 /var/tmp/bdevperf.sock
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96259 ']'
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:14.554 20:37:36 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:21:14.813 [2024-07-15 20:37:36.114672] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:21:14.813 [2024-07-15 20:37:36.114817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96259 ]
00:21:14.813 [2024-07-15 20:37:36.258152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:15.071 [2024-07-15 20:37:36.320270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:15.636 20:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:15.636 20:37:37 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:21:15.636 20:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:21:15.893 20:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:21:16.458 NVMe0n1
00:21:16.458 20:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96302
00:21:16.458 20:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:16.458 20:37:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:21:16.458 Running I/O for 10 seconds...
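The bdev_nvme_attach_controller call above is where the behaviour under test is configured: --reconnect-delay-sec 1 retries the connection every second, --fast-io-fail-timeout-sec 2 lets queued I/O start failing after two seconds without a connection, and --ctrlr-loss-timeout-sec 5 gives up on the controller entirely after five seconds (that is the expected reading of these options; the SPDK bdev_nvme documentation is authoritative). The two RPCs, isolated from the trace for readability with every value copied from the log:

# The two RPCs traced above, reformatted onto their own lines; nothing added.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
"$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1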
00:21:17.389 20:37:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.648 [2024-07-15 20:37:38.937598] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937652] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937664] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937672] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937691] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937699] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937708] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937716] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937733] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937741] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937749] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937758] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937766] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937774] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937791] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937799] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937808] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937816] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937840] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937856] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937864] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937888] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937897] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937922] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937931] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937939] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937948] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937957] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.937966] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b4b50 is same with the state(5) to be set 00:21:17.648 [2024-07-15 20:37:38.938841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.648 [2024-07-15 20:37:38.938894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.648 [2024-07-15 20:37:38.938918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.648 [2024-07-15 20:37:38.938929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.648 [2024-07-15 20:37:38.938940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.648 [2024-07-15 20:37:38.938951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.648 [2024-07-15 20:37:38.938962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.938971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.938982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.938991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77512 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.939891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.939905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.649 [2024-07-15 20:37:38.940229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:17.649 [2024-07-15 20:37:38.940250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.940906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.940915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.649 [2024-07-15 20:37:38.941986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.649 [2024-07-15 20:37:38.941998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.942728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.942978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.943905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.943914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.944181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.944206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.944227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:17.650 [2024-07-15 20:37:38.944247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.944267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.944685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.650 [2024-07-15 20:37:38.944731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.650 [2024-07-15 20:37:38.944740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.944751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.944761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.944771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.944781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.944792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.944801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.944812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.944821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.944832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 
20:37:38.945160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.945815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.945827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946239] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.946885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.946896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.651 [2024-07-15 20:37:38.947537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.947699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.947782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78320 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.947804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.947825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:78336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.947844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.947855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.948092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.948115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.948125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.948136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.651 [2024-07-15 20:37:38.948146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.651 [2024-07-15 20:37:38.948157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 
[2024-07-15 20:37:38.948637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:17.652 [2024-07-15 20:37:38.948955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.948998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:17.652 [2024-07-15 20:37:38.949009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:17.652 [2024-07-15 20:37:38.949018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78432 len:8 PRP1 0x0 PRP2 0x0 00:21:17.652 [2024-07-15 20:37:38.949027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.949268] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e878d0 was disconnected and freed. reset controller. 
00:21:17.652 [2024-07-15 20:37:38.949357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.652 [2024-07-15 20:37:38.949490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.949585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.652 [2024-07-15 20:37:38.949598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.949608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.652 [2024-07-15 20:37:38.949617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.949626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.652 [2024-07-15 20:37:38.949636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.652 [2024-07-15 20:37:38.949897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set 00:21:17.652 [2024-07-15 20:37:38.950308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:17.652 [2024-07-15 20:37:38.950345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor 00:21:17.652 [2024-07-15 20:37:38.950662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:17.652 [2024-07-15 20:37:38.950696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1a240 with addr=10.0.0.2, port=4420 00:21:17.652 [2024-07-15 20:37:38.950709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set 00:21:17.652 [2024-07-15 20:37:38.950729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor 00:21:17.652 [2024-07-15 20:37:38.950746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:17.652 [2024-07-15 20:37:38.950757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:17.652 [2024-07-15 20:37:38.950907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:17.652 [2024-07-15 20:37:38.951011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
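Editor's note: the connect() failures with errno = 111 (ECONNREFUSED) above are expected at this point in the run; the test has removed the TCP listener on 10.0.0.2:4420, so every host-side controller reset fails until the listener is re-added by the rpc.py call a few lines below. As a minimal, hypothetical sketch of that listener toggle (only the two RPC invocations visible in this log, with the same flags; the sleep interval is illustrative and this is not the verbatim host/timeout.sh logic):

  # Sketch only: toggle the NVMe/TCP listener that the timeout test exercises.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Drop the listener: in-flight I/O is completed as ABORTED - SQ DELETION and
  # the host's reconnect attempts fail with connect() errno 111 (ECONNREFUSED).
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  sleep 1   # let the initiator go through at least one failed controller reset

  # Restore the listener: the next reset attempt reconnects and I/O resumes,
  # which is what "Resetting controller successful." reports further below.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420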
00:21:17.652 [2024-07-15 20:37:38.951025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:17.652 20:37:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:21:18.585 [2024-07-15 20:37:39.951157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:18.585 [2024-07-15 20:37:39.951226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1a240 with addr=10.0.0.2, port=4420
00:21:18.585 [2024-07-15 20:37:39.951243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set
00:21:18.585 [2024-07-15 20:37:39.951270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor
00:21:18.585 [2024-07-15 20:37:39.951290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:18.585 [2024-07-15 20:37:39.951301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:18.585 [2024-07-15 20:37:39.951312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:18.585 [2024-07-15 20:37:39.951341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:18.585 [2024-07-15 20:37:39.951354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:18.585 20:37:39 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:18.842 [2024-07-15 20:37:40.293980] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:18.842 20:37:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 96302
00:21:19.821 [2024-07-15 20:37:40.963686] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:26.385
00:21:26.385 Latency(us)
00:21:26.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:26.385 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:26.385 Verification LBA range: start 0x0 length 0x4000
00:21:26.385 NVMe0n1 : 10.01 6023.96 23.53 0.00 0.00 21211.70 1869.27 3035150.89
00:21:26.385 ===================================================================================================================
00:21:26.385 Total : 6023.96 23.53 0.00 0.00 21211.70 1869.27 3035150.89
00:21:26.385 0
00:21:26.385 20:37:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96420
00:21:26.385 20:37:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:26.385 20:37:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:21:26.643 Running I/O for 10 seconds...
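Editor's note: the MiB/s column in the Latency(us) summary above is just IOPS times the 4096-byte I/O size; a quick check reproduces the reported 23.53 MiB/s:

  # 6023.96 IOPS * 4096 B / 2^20 ~= 23.53 MiB/s, matching the NVMe0n1 row above.
  awk 'BEGIN { printf "%.2f MiB/s\n", 6023.96 * 4096 / 1048576 }'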
00:21:27.574 20:37:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:27.834 [2024-07-15 20:37:49.088186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088244] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088257] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088266] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088275] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088283] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088292] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088300] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088309] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088317] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088325] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088333] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088342] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088350] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088358] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088367] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088375] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088391] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088400] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088407] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088424] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088432] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.834 [2024-07-15 20:37:49.088440] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088448] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088456] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088464] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088473] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088481] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088497] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088505] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088513] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088523] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088532] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088540] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088549] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088557] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088565] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088573] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088581] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the 
state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088589] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088597] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088606] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088614] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088622] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088630] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088638] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088646] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088654] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.088662] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140d660 is same with the state(5) to be set 00:21:27.835 [2024-07-15 20:37:49.089446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71824 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.089917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.089928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:27.835 [2024-07-15 20:37:49.090467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.835 [2024-07-15 20:37:49.090732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.835 [2024-07-15 20:37:49.090745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.090754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.090766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.090775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.090786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.091708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.091720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.836 [2024-07-15 20:37:49.092693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.092737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.092762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.092773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.836 [2024-07-15 20:37:49.093927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.836 [2024-07-15 20:37:49.093936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.093947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.094924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.094935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 
20:37:49.095558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.095932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.095943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.096881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.837 [2024-07-15 20:37:49.096897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.837 [2024-07-15 20:37:49.097431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.097459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.097479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.097503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.097512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.097523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.097533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.097814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.097964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.097982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:116 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.097992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.098757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.098901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72632 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.099026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.099060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.099083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.099336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.838 [2024-07-15 20:37:49.099381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.099408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.099429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.099694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.099730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.099751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.099763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 
[2024-07-15 20:37:49.099773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.100310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.100338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.100356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.100367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.100379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.838 [2024-07-15 20:37:49.100388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.838 [2024-07-15 20:37:49.100399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.839 [2024-07-15 20:37:49.100408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.100541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.100833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.100858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.100881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.100895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.100905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.101704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.101990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.102024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.102045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.102065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.102338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.839 [2024-07-15 20:37:49.102367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.839 [2024-07-15 20:37:49.102681] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.839 [2024-07-15 20:37:49.102695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72800 len:8 PRP1 0x0 PRP2 0x0 00:21:27.839 [2024-07-15 20:37:49.102706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.102986] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e89670 was disconnected and freed. reset controller. 00:21:27.839 [2024-07-15 20:37:49.103312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.839 [2024-07-15 20:37:49.103341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.103357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.839 [2024-07-15 20:37:49.103372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.103385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.839 [2024-07-15 20:37:49.103394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.103404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.839 [2024-07-15 20:37:49.103414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.839 [2024-07-15 20:37:49.103542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set 00:21:27.839 [2024-07-15 20:37:49.104191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:27.839 [2024-07-15 20:37:49.104232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor 00:21:27.839 [2024-07-15 20:37:49.104567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:27.839 [2024-07-15 20:37:49.104609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1a240 with addr=10.0.0.2, port=4420 00:21:27.839 [2024-07-15 20:37:49.104623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set 00:21:27.839 [2024-07-15 20:37:49.104646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor 00:21:27.839 [2024-07-15 20:37:49.104664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:27.839 [2024-07-15 20:37:49.104674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:27.839 [2024-07-15 20:37:49.105052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:27.839 [2024-07-15 20:37:49.105093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:27.839 [2024-07-15 20:37:49.105107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:27.839 20:37:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:21:28.770 [2024-07-15 20:37:50.105296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:28.770 [2024-07-15 20:37:50.105388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1a240 with addr=10.0.0.2, port=4420 00:21:28.770 [2024-07-15 20:37:50.105413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set 00:21:28.770 [2024-07-15 20:37:50.105449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor 00:21:28.770 [2024-07-15 20:37:50.105476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:28.770 [2024-07-15 20:37:50.105491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:28.770 [2024-07-15 20:37:50.105508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:28.770 [2024-07-15 20:37:50.106006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:28.770 [2024-07-15 20:37:50.106050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:29.703 [2024-07-15 20:37:51.106238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:29.703 [2024-07-15 20:37:51.106323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1a240 with addr=10.0.0.2, port=4420 00:21:29.703 [2024-07-15 20:37:51.106341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set 00:21:29.703 [2024-07-15 20:37:51.106369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor 00:21:29.703 [2024-07-15 20:37:51.106389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:29.703 [2024-07-15 20:37:51.106398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:29.703 [2024-07-15 20:37:51.106409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:29.703 [2024-07-15 20:37:51.106437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:29.703 [2024-07-15 20:37:51.106449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:30.637 [2024-07-15 20:37:52.108992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:30.637 [2024-07-15 20:37:52.109075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1a240 with addr=10.0.0.2, port=4420
00:21:30.637 [2024-07-15 20:37:52.109092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e1a240 is same with the state(5) to be set
00:21:30.637 [2024-07-15 20:37:52.109526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e1a240 (9): Bad file descriptor
00:21:30.637 [2024-07-15 20:37:52.109980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:30.637 [2024-07-15 20:37:52.110013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:30.637 [2024-07-15 20:37:52.110027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:30.637 [2024-07-15 20:37:52.114189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:30.637 [2024-07-15 20:37:52.114221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:30.637 20:37:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:30.895 [2024-07-15 20:37:52.380155] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:31.153 20:37:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 96420
00:21:31.719 [2024-07-15 20:37:53.155716] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
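The run above is the first timeout scenario in host/timeout.sh: while the target's TCP listener is gone, every reconnect attempt fails with connect() errno 111 and the controller reset cannot complete until host/timeout.sh@102 re-adds the listener. As a rough sketch only (not the test script itself), the listener toggle that drives this can be reproduced with the two RPCs visible elsewhere in this trace; the $rpc shorthand is introduced here for brevity, and the repo path, NQN and address are copied from the log:

    # Sketch of the listener toggle exercised by host/timeout.sh (assumptions noted above).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Drop the TCP listener so outstanding I/O times out and reconnects are refused (errno 111).
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3    # let a few reconnect cycles fail, as host/timeout.sh@101 does above
    # Restore the listener; the next controller reset attempt should then succeed.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420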
00:21:36.980
00:21:36.980 Latency(us)
00:21:36.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:36.980 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:36.980 Verification LBA range: start 0x0 length 0x4000
00:21:36.980 NVMe0n1 : 10.01 5167.97 20.19 3572.07 0.00 14617.36 636.74 3035150.89
00:21:36.980 ===================================================================================================================
00:21:36.980 Total : 5167.97 20.19 3572.07 0.00 14617.36 0.00 3035150.89
00:21:36.980 0
00:21:36.980 20:37:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96259
00:21:36.980 20:37:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96259 ']'
00:21:36.980 20:37:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96259
00:21:36.980 20:37:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:21:36.980 20:37:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:36.980 20:37:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96259
00:21:36.980 killing process with pid 96259
00:21:36.980 Received shutdown signal, test time was about 10.000000 seconds
00:21:36.980
00:21:36.980 Latency(us)
00:21:36.980 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:36.980 ===================================================================================================================
00:21:36.980 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96259'
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96259
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96259
00:21:36.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96541
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96541 /var/tmp/bdevperf.sock
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 96541 ']'
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:36.980 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:21:36.980 [2024-07-15 20:37:58.221157] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization...
00:21:36.980 [2024-07-15 20:37:58.221493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96541 ]
00:21:36.980 [2024-07-15 20:37:58.359540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:36.980 [2024-07-15 20:37:58.428749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:37.238 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:37.238 20:37:58 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:21:37.238 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96556
00:21:37.238 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96541 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:21:37.238 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:21:37.495 20:37:58 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:21:37.753 NVMe0n1
00:21:37.753 20:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96609
00:21:37.753 20:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
20:37:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:21:38.011 Running I/O for 10 seconds...
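Before the 10-second run starts, the trace above (host/timeout.sh@109 through @125) brings up a second bdevperf instance idle on its own RPC socket and configures it entirely over /var/tmp/bdevperf.sock. Collected in one place, and using only binaries and flags that appear in the trace (the $SPDK shorthand and the backgrounding with & are added here for brevity; this is a sketch, not a verified standalone script):

    # Rough consolidation of the bdevperf setup traced above.
    SPDK=/home/vagrant/spdk_repo/spdk
    # Start bdevperf idle on core mask 0x4; -z keeps it waiting for a perform_tests RPC.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    bdevperf_pid=$!
    # Optionally attach the test's bpftrace probes to that process, as host/timeout.sh@115 does.
    $SPDK/scripts/bpftrace.sh $bdevperf_pid $SPDK/scripts/bpf/nvmf_timeout.bt &
    # NVMe bdev options; flags copied verbatim from the trace (see rpc.py bdev_nvme_set_options -h for their meaning).
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    # Attach the target: retry every --reconnect-delay-sec seconds, give up after --ctrlr-loss-timeout-sec.
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Kick off the queued randread workload against the NVMe0n1 bdev created above.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The real test also waits for the RPC socket to appear (waitforlisten 96541 /var/tmp/bdevperf.sock in the trace) before issuing the first RPC; that wait is elided from the sketch.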
00:21:38.951 20:38:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:39.217 [2024-07-15 20:38:00.457036] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410870 is same with the state(5) to be set
[... the same tcp.c:1621 nvmf_tcp_qpair_set_recv_state error for tqpair=0x1410870 repeats roughly 120 more times, with only the microsecond timestamp changing (20:38:00.457091 through 20:38:00.458666); the duplicate lines are omitted here ...]
00:21:39.219 [2024-07-15 20:38:00.458679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1410870 is same with the state(5) to be set
00:21:39.219 [2024-07-15 20:38:00.459792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.219 [2024-07-15 20:38:00.459831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.219 [2024-07-15 20:38:00.459854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.219 [2024-07-15 20:38:00.459865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.219 [2024-07-15 20:38:00.459892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.219 [2024-07-15 20:38:00.459902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:39.219 [2024-07-15 20:38:00.459914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:39.219 [2024-07-15 20:38:00.459923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.459935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.459945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.459956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.459966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.459977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.459987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.459998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 
20:38:00.460146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:34352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:75584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:48504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:119672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460574] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:48264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.219 [2024-07-15 20:38:00.460893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.219 [2024-07-15 20:38:00.460904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.460915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.460927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.460937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.460949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.460959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.460971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.460980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.460992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:59680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 
20:38:00.461257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:121336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:106560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461679] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.220 [2024-07-15 20:38:00.461827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.220 [2024-07-15 20:38:00.461838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.461985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.461997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:53992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:39.221 [2024-07-15 20:38:00.462124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:48432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.221 [2024-07-15 20:38:00.462557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.221 [2024-07-15 20:38:00.462568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.222 [2024-07-15 20:38:00.462578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.222 [2024-07-15 20:38:00.462604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:39.222 [2024-07-15 20:38:00.462614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:39.222 [2024-07-15 20:38:00.462623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16168 len:8 PRP1 0x0 PRP2 0x0 00:21:39.222 [2024-07-15 20:38:00.462634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.222 [2024-07-15 20:38:00.462677] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fd08d0 was disconnected and freed. reset controller. 00:21:39.222 [2024-07-15 20:38:00.462760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.222 [2024-07-15 20:38:00.462777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.222 [2024-07-15 20:38:00.462788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.222 [2024-07-15 20:38:00.462797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.222 [2024-07-15 20:38:00.462807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.222 [2024-07-15 20:38:00.462817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.222 [2024-07-15 20:38:00.462826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.222 [2024-07-15 20:38:00.462835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.222 [2024-07-15 20:38:00.462844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63240 is same with the state(5) to be set 00:21:39.222 [2024-07-15 20:38:00.463118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.222 [2024-07-15 20:38:00.463144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63240 (9): Bad file descriptor 00:21:39.222 [2024-07-15 20:38:00.463253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.222 [2024-07-15 20:38:00.463276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63240 with addr=10.0.0.2, port=4420 00:21:39.222 
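The burst above is the direct fallout of the nvmf_subsystem_remove_listener call at host/timeout.sh@126: the target drops the TCP connection, every READ still queued on qpair 1 is completed as "ABORTED - SQ DELETION", bdev_nvme frees the disconnected qpair (0x1fd08d0) and resets the controller, and the host's first reconnect attempt immediately fails because the 4420 listener is gone. A rough way to quantify this from a saved copy of this console output; build.log is an assumed file name (the test itself does not produce it), and the grep patterns are copied from the messages above:

    # Count how many queued commands the target aborted when the listener was removed.
    grep -c 'ABORTED - SQ DELETION' build.log
    # List the qpair addresses that hit the recv-state error, with occurrence counts.
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c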
[2024-07-15 20:38:00.463287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63240 is same with the state(5) to be set 00:21:39.222 [2024-07-15 20:38:00.463305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63240 (9): Bad file descriptor 00:21:39.222 [2024-07-15 20:38:00.463322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:39.222 [2024-07-15 20:38:00.463332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:39.222 [2024-07-15 20:38:00.463342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:39.222 [2024-07-15 20:38:00.463362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:39.222 [2024-07-15 20:38:00.463374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.222 20:38:00 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 96609 00:21:41.122 [2024-07-15 20:38:02.478764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.122 [2024-07-15 20:38:02.478832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63240 with addr=10.0.0.2, port=4420 00:21:41.122 [2024-07-15 20:38:02.478849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63240 is same with the state(5) to be set 00:21:41.122 [2024-07-15 20:38:02.478887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63240 (9): Bad file descriptor 00:21:41.122 [2024-07-15 20:38:02.478908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.122 [2024-07-15 20:38:02.478918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:41.122 [2024-07-15 20:38:02.478929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:41.122 [2024-07-15 20:38:02.478955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:41.122 [2024-07-15 20:38:02.478966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.021 [2024-07-15 20:38:04.479151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:43.021 [2024-07-15 20:38:04.479216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63240 with addr=10.0.0.2, port=4420 00:21:43.021 [2024-07-15 20:38:04.479241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63240 is same with the state(5) to be set 00:21:43.022 [2024-07-15 20:38:04.479279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63240 (9): Bad file descriptor 00:21:43.022 [2024-07-15 20:38:04.479301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.022 [2024-07-15 20:38:04.479312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:43.022 [2024-07-15 20:38:04.479323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:43.022 [2024-07-15 20:38:04.479350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:43.022 [2024-07-15 20:38:04.479362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:45.567 [2024-07-15 20:38:06.479420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.568 [2024-07-15 20:38:06.479463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.568 [2024-07-15 20:38:06.479475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:45.568 [2024-07-15 20:38:06.479486] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:45.568 [2024-07-15 20:38:06.479515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.134 00:21:46.134 Latency(us) 00:21:46.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.134 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:21:46.134 NVMe0n1 : 8.19 2532.77 9.89 15.62 0.00 50165.08 2502.28 7015926.69 00:21:46.134 =================================================================================================================== 00:21:46.134 Total : 2532.77 9.89 15.62 0.00 50165.08 2502.28 7015926.69 00:21:46.134 0 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.134 Attaching 5 probes... 00:21:46.134 1431.604729: reset bdev controller NVMe0 00:21:46.134 1431.677953: reconnect bdev controller NVMe0 00:21:46.134 3447.108589: reconnect delay bdev controller NVMe0 00:21:46.134 3447.134292: reconnect bdev controller NVMe0 00:21:46.134 5447.494332: reconnect delay bdev controller NVMe0 00:21:46.134 5447.519201: reconnect bdev controller NVMe0 00:21:46.134 7447.888676: reconnect delay bdev controller NVMe0 00:21:46.134 7447.908605: reconnect bdev controller NVMe0 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 96556 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96541 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 96541 ']' 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 96541 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96541 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:46.134 killing process with pid 96541 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96541' 00:21:46.134 20:38:07 
nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 96541 00:21:46.134 Received shutdown signal, test time was about 8.245131 seconds 00:21:46.134 00:21:46.134 Latency(us) 00:21:46.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.134 =================================================================================================================== 00:21:46.134 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.134 20:38:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 96541 00:21:46.391 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:46.648 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:21:46.648 20:38:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:21:46.648 20:38:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.648 20:38:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.648 rmmod nvme_tcp 00:21:46.648 rmmod nvme_fabrics 00:21:46.648 rmmod nvme_keyring 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95989 ']' 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95989 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 95989 ']' 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 95989 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 95989 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:46.648 killing process with pid 95989 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 95989' 00:21:46.648 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 95989 00:21:46.649 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 95989 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:46.905 00:21:46.905 real 0m45.361s 00:21:46.905 user 2m14.351s 00:21:46.905 sys 0m4.689s 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.905 20:38:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:46.905 ************************************ 00:21:46.905 END TEST nvmf_timeout 00:21:46.905 ************************************ 00:21:46.905 20:38:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:46.905 20:38:08 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:21:46.905 20:38:08 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:21:46.905 20:38:08 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:46.905 20:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.905 20:38:08 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:21:46.905 00:21:46.905 real 15m40.683s 00:21:46.905 user 41m44.172s 00:21:46.905 sys 3m17.739s 00:21:46.905 20:38:08 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.905 20:38:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.905 ************************************ 00:21:46.905 END TEST nvmf_tcp 00:21:46.905 ************************************ 00:21:47.163 20:38:08 -- common/autotest_common.sh@1142 -- # return 0 00:21:47.163 20:38:08 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:21:47.163 20:38:08 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:47.163 20:38:08 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:47.163 20:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:47.163 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:21:47.163 ************************************ 00:21:47.163 START TEST spdkcli_nvmf_tcp 00:21:47.163 ************************************ 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:21:47.163 * Looking for test storage... 
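Before the spdkcli_nvmf_tcp run continues below, a note on the nvmf_timeout check that finished above: the run passes only when the bdev layer inserted a reconnect delay between controller resets more than twice, which host/timeout.sh verifies by counting 'reconnect delay bdev controller NVMe0' lines in the captured trace (the grep above returns 3). A minimal sketch of that check, reusing the trace path, grep pattern, and threshold visible in the log; the if/exit wrapper and error message are assumptions for illustration, not the actual host/timeout.sh code:

trace_txt=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
# Count how many delayed reconnects bdevperf traced (the log above shows 3).
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_txt")
# Same comparison as host/timeout.sh@132; the error handling below is assumed.
if (( delays <= 2 )); then
    echo "expected more than two delayed reconnects, got $delays" >&2
    exit 1
fi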
00:21:47.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96834 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96834 00:21:47.163 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 96834 ']' 00:21:47.164 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.164 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.164 20:38:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:47.164 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:47.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.164 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.164 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.164 [2024-07-15 20:38:08.596405] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:21:47.164 [2024-07-15 20:38:08.597218] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96834 ] 00:21:47.421 [2024-07-15 20:38:08.730872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:47.421 [2024-07-15 20:38:08.794153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.421 [2024-07-15 20:38:08.794164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.421 20:38:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:47.679 20:38:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:47.679 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:47.679 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:47.679 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:47.679 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:47.679 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:21:47.679 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:47.679 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:47.679 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:21:47.679 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:47.679 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:47.679 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:47.679 ' 00:21:50.206 [2024-07-15 20:38:11.636857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.586 [2024-07-15 20:38:12.925959] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:21:54.112 [2024-07-15 20:38:15.307464] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:21:56.013 [2024-07-15 20:38:17.360886] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:21:57.917 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:57.917 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:57.917 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:57.917 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:57.917 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:57.917 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:57.917 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:57.917 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 
IPv4', '127.0.0.1:4260', True] 00:21:57.917 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:57.917 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:57.917 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:57.917 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:21:57.917 20:38:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 
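The check_match step just above validates the configuration built through spdkcli by dumping the /nvmf tree and comparing it against a stored pattern file, then removing the generated dump. A rough reconstruction of those three helper commands from spdkcli/common.sh, based only on what the log shows; redirecting the `ll /nvmf` output into the .test file is an assumption, since the redirection itself is not captured:

SPDK=/home/vagrant/spdk_repo/spdk
MATCH_DIR=$SPDK/test/spdkcli/match_files
# Dump the live /nvmf tree (redirect into the .test file is assumed).
"$SPDK/scripts/spdkcli.py" ll /nvmf > "$MATCH_DIR/spdkcli_nvmf.test"
# Compare the dump against the expected pattern file.
"$SPDK/test/app/match/match" "$MATCH_DIR/spdkcli_nvmf.test.match"
# Drop the generated dump, as common.sh@46 does above.
rm -f "$MATCH_DIR/spdkcli_nvmf.test"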
00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:58.177 20:38:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:58.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:58.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:58.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:58.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:21:58.177 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:21:58.177 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:58.177 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:58.177 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:58.177 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:58.177 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:58.177 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:58.177 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:58.177 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:58.177 ' 00:22:03.442 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:22:03.442 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:22:03.442 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:03.442 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:22:03.442 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:22:03.442 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:22:03.442 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:22:03.442 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:22:03.442 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:22:03.442 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:22:03.442 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:22:03.442 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:22:03.442 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:22:03.442 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:22:03.700 20:38:24 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:22:03.700 20:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.700 20:38:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96834 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96834 ']' 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96834 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 96834 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:03.700 killing process with pid 96834 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 96834' 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 96834 00:22:03.700 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 96834 00:22:03.957 20:38:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96834 ']' 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96834 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 96834 ']' 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 96834 00:22:03.958 Process with pid 96834 is not found 00:22:03.958 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (96834) - No such process 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 96834 is not found' 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:22:03.958 00:22:03.958 real 0m16.777s 00:22:03.958 user 0m36.455s 00:22:03.958 sys 0m0.837s 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:03.958 20:38:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:03.958 ************************************ 00:22:03.958 END TEST spdkcli_nvmf_tcp 00:22:03.958 ************************************ 00:22:03.958 20:38:25 -- common/autotest_common.sh@1142 -- # return 0 00:22:03.958 20:38:25 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:03.958 20:38:25 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:03.958 20:38:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:03.958 20:38:25 -- common/autotest_common.sh@10 -- # set +x 00:22:03.958 ************************************ 00:22:03.958 START TEST nvmf_identify_passthru 00:22:03.958 
************************************ 00:22:03.958 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:22:03.958 * Looking for test storage... 00:22:03.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:03.958 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:03.958 20:38:25 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.958 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:03.958 20:38:25 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:22:03.958 20:38:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.958 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.958 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:03.958 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:03.958 Cannot find device "nvmf_tgt_br" 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:03.958 Cannot find device "nvmf_tgt_br2" 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:03.958 Cannot find device "nvmf_tgt_br" 00:22:03.958 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:22:03.959 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:03.959 Cannot find device "nvmf_tgt_br2" 00:22:03.959 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:22:03.959 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:04.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:04.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:04.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:22:04.215 00:22:04.215 --- 10.0.0.2 ping statistics --- 00:22:04.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.215 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:04.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:04.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:22:04.215 00:22:04.215 --- 10.0.0.3 ping statistics --- 00:22:04.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.215 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:04.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:04.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:04.215 00:22:04.215 --- 10.0.0.1 ping statistics --- 00:22:04.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.215 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.215 20:38:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:04.474 20:38:25 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:22:04.474 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:22:04.733 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
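The serial number 12340 above comes from running spdk_nvme_identify against the first local NVMe controller, whose PCI address (0000:00:10.0) is discovered through gen_nvme.sh. A condensed sketch of those two lookups using only commands shown in the log; picking the first traddr with head -n1 and storing the results in shell variables are assumptions made for the example:

SPDK=/home/vagrant/spdk_repo/spdk
# First controller address from the generated bdev config (0000:00:10.0 in this run).
bdf=$("$SPDK/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
# Identify that controller over PCIe and pull the serial number field (12340 above).
serial=$("$SPDK/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
    | grep 'Serial Number:' | awk '{print $3}')
echo "bdf=$bdf serial=$serial"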
00:22:04.733 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:22:04.733 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:22:04.733 20:38:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=97307 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:04.733 20:38:26 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 97307 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 97307 ']' 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:04.733 20:38:26 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:04.993 [2024-07-15 20:38:26.253384] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:22:04.993 [2024-07-15 20:38:26.253497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.993 [2024-07-15 20:38:26.395176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.993 [2024-07-15 20:38:26.489346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.993 [2024-07-15 20:38:26.489402] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.993 [2024-07-15 20:38:26.489414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.993 [2024-07-15 20:38:26.489422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:04.993 [2024-07-15 20:38:26.489429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.993 [2024-07-15 20:38:26.489588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.993 [2024-07-15 20:38:26.489862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.993 [2024-07-15 20:38:26.489863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.993 [2024-07-15 20:38:26.490573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.970 [2024-07-15 20:38:27.322546] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.970 [2024-07-15 20:38:27.335933] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.970 Nvme0n1 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.970 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.970 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.228 [2024-07-15 20:38:27.476479] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.228 [ 00:22:06.228 { 00:22:06.228 "allow_any_host": true, 00:22:06.228 "hosts": [], 00:22:06.228 "listen_addresses": [], 00:22:06.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:06.228 "subtype": "Discovery" 00:22:06.228 }, 00:22:06.228 { 00:22:06.228 "allow_any_host": true, 00:22:06.228 "hosts": [], 00:22:06.228 "listen_addresses": [ 00:22:06.228 { 00:22:06.228 "adrfam": "IPv4", 00:22:06.228 "traddr": "10.0.0.2", 00:22:06.228 "trsvcid": "4420", 00:22:06.228 "trtype": "TCP" 00:22:06.228 } 00:22:06.228 ], 00:22:06.228 "max_cntlid": 65519, 00:22:06.228 "max_namespaces": 1, 00:22:06.228 "min_cntlid": 1, 00:22:06.228 "model_number": "SPDK bdev Controller", 00:22:06.228 "namespaces": [ 00:22:06.228 { 00:22:06.228 "bdev_name": "Nvme0n1", 00:22:06.228 "name": "Nvme0n1", 00:22:06.228 "nguid": "FDB4D4E899464C87991DD081FADAA10F", 00:22:06.228 "nsid": 1, 00:22:06.228 "uuid": "fdb4d4e8-9946-4c87-991d-d081fadaa10f" 00:22:06.228 } 00:22:06.228 ], 00:22:06.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:06.228 "serial_number": "SPDK00000000000001", 00:22:06.228 "subtype": "NVMe" 00:22:06.228 } 00:22:06.228 ] 00:22:06.228 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:22:06.228 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:22:06.485 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:22:06.485 20:38:27 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:22:06.485 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:22:06.485 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.485 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.485 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:06.485 20:38:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.485 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:22:06.485 20:38:27 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:22:06.485 20:38:27 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:06.485 20:38:27 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:22:06.760 20:38:27 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:06.760 20:38:27 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:22:06.760 20:38:27 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:06.760 20:38:27 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:06.760 rmmod nvme_tcp 00:22:06.760 rmmod nvme_fabrics 00:22:06.760 rmmod nvme_keyring 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 97307 ']' 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 97307 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 97307 ']' 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 97307 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97307 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:06.760 killing process with pid 97307 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97307' 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 97307 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 97307 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:06.760 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.760 20:38:28 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:06.760 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.019 20:38:28 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:07.019 00:22:07.019 real 0m3.003s 00:22:07.019 user 0m7.474s 00:22:07.019 sys 0m0.730s 00:22:07.019 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:07.019 ************************************ 00:22:07.019 END TEST nvmf_identify_passthru 00:22:07.019 20:38:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:22:07.019 ************************************ 00:22:07.019 20:38:28 -- common/autotest_common.sh@1142 -- # return 0 00:22:07.019 20:38:28 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:07.019 20:38:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:07.019 20:38:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:07.019 20:38:28 -- common/autotest_common.sh@10 -- # set +x 00:22:07.019 ************************************ 00:22:07.019 START TEST nvmf_dif 00:22:07.019 ************************************ 00:22:07.019 20:38:28 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:22:07.019 * Looking for test storage... 00:22:07.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:07.019 20:38:28 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:07.019 20:38:28 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.019 20:38:28 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.019 20:38:28 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.019 20:38:28 nvmf_dif -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.019 20:38:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.019 20:38:28 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.019 20:38:28 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:22:07.019 20:38:28 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:07.019 20:38:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:22:07.019 20:38:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:22:07.019 20:38:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:22:07.019 20:38:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:22:07.019 20:38:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.019 20:38:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:07.019 20:38:28 nvmf_dif -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:07.019 20:38:28 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:07.020 Cannot find device "nvmf_tgt_br" 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@155 -- # true 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:07.020 Cannot find device "nvmf_tgt_br2" 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@156 -- # true 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:07.020 Cannot find device "nvmf_tgt_br" 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@158 -- # true 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:07.020 Cannot find device "nvmf_tgt_br2" 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@159 -- # true 00:22:07.020 20:38:28 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:07.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@162 -- # true 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:07.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@163 -- # true 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@170 -- # ip link 
add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:07.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:07.278 00:22:07.278 --- 10.0.0.2 ping statistics --- 00:22:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.278 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:07.278 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:07.278 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:07.278 00:22:07.278 --- 10.0.0.3 ping statistics --- 00:22:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.278 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:07.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:07.278 00:22:07.278 --- 10.0.0.1 ping statistics --- 00:22:07.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.278 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:22:07.278 20:38:28 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:07.537 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:07.537 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:07.537 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:07.795 20:38:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:22:07.795 20:38:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97650 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:07.795 20:38:29 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97650 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 97650 ']' 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.795 20:38:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:07.795 [2024-07-15 20:38:29.140297] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:22:07.795 [2024-07-15 20:38:29.140399] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.795 [2024-07-15 20:38:29.281273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.053 [2024-07-15 20:38:29.356647] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:08.053 [2024-07-15 20:38:29.356710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.053 [2024-07-15 20:38:29.356723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.053 [2024-07-15 20:38:29.356750] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.053 [2024-07-15 20:38:29.356760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.053 [2024-07-15 20:38:29.356796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:22:08.053 20:38:29 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 20:38:29 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.053 20:38:29 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:22:08.053 20:38:29 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 [2024-07-15 20:38:29.478356] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.053 20:38:29 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.053 20:38:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 ************************************ 00:22:08.053 START TEST fio_dif_1_default 00:22:08.053 ************************************ 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 bdev_null0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.053 20:38:29 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:08.053 [2024-07-15 20:38:29.522469] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:22:08.053 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:08.054 { 00:22:08.054 "params": { 00:22:08.054 "name": "Nvme$subsystem", 00:22:08.054 "trtype": "$TEST_TRANSPORT", 00:22:08.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:08.054 "adrfam": "ipv4", 00:22:08.054 "trsvcid": "$NVMF_PORT", 00:22:08.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:08.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:08.054 "hdgst": ${hdgst:-false}, 00:22:08.054 "ddgst": ${ddgst:-false} 00:22:08.054 }, 00:22:08.054 "method": "bdev_nvme_attach_controller" 00:22:08.054 } 00:22:08.054 EOF 00:22:08.054 )") 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 
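
The fio_dif_1_default setup being assembled here feeds fio two generated inputs through /dev/fd: a JSON bdev config from gen_nvmf_target_json and a job file from gen_fio_conf. A rough stand-alone equivalent is sketched below using temporary files and process substitution instead of the test's fd plumbing. The subsystems/bdev wrapper, the Nvme0n1 bdev name, and the job parameters (randread, 4 KiB blocks, iodepth 4, taken from the fio banner printed further down) are assumptions rather than a verbatim copy of the generated files.

  rootdir=/home/vagrant/spdk_repo/spdk

  # Bdev config for the fio plugin: attach an NVMe-oF/TCP controller to the
  # subsystem created above; the params mirror the generated fragment that
  # appears later in the trace.
  json_conf='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }'

  # Minimal job file; the bdev name Nvme0n1 is assumed, the I/O pattern follows
  # the fio banner below, and the SPDK plugin is run with thread=1.
  printf '%s\n' '[filename0]' 'filename=Nvme0n1' 'thread=1' 'rw=randread' \
      'bs=4096' 'iodepth=4' 'time_based=1' 'runtime=10' > /tmp/dif_job.fio

  # Same invocation shape as the trace: SPDK's fio bdev plugin via LD_PRELOAD,
  # JSON config via --spdk_json_conf, job file as the trailing argument.
  LD_PRELOAD="$rootdir/build/fio/spdk_bdev" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf <(printf '%s\n' "$json_conf") /tmp/dif_job.fio
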
00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:22:08.054 20:38:29 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:08.054 "params": { 00:22:08.054 "name": "Nvme0", 00:22:08.054 "trtype": "tcp", 00:22:08.054 "traddr": "10.0.0.2", 00:22:08.054 "adrfam": "ipv4", 00:22:08.054 "trsvcid": "4420", 00:22:08.054 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:08.054 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:08.054 "hdgst": false, 00:22:08.054 "ddgst": false 00:22:08.054 }, 00:22:08.054 "method": "bdev_nvme_attach_controller" 00:22:08.054 }' 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:08.312 20:38:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:08.312 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:08.312 fio-3.35 00:22:08.312 Starting 1 thread 00:22:20.529 00:22:20.529 filename0: (groupid=0, jobs=1): err= 0: pid=97721: Mon Jul 15 20:38:40 2024 00:22:20.529 read: IOPS=2110, BW=8444KiB/s (8647kB/s)(82.5MiB/10001msec) 00:22:20.529 slat (nsec): min=7728, max=58505, avg=8972.59, stdev=2821.58 00:22:20.529 clat (usec): min=452, max=41698, avg=1868.04, stdev=7333.19 00:22:20.529 lat (usec): min=460, max=41709, avg=1877.01, stdev=7333.32 00:22:20.529 clat percentiles (usec): 00:22:20.529 | 1.00th=[ 457], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:22:20.529 | 30.00th=[ 482], 40.00th=[ 486], 50.00th=[ 490], 
60.00th=[ 494], 00:22:20.529 | 70.00th=[ 502], 80.00th=[ 510], 90.00th=[ 529], 95.00th=[ 627], 00:22:20.529 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:22:20.529 | 99.99th=[41681] 00:22:20.529 bw ( KiB/s): min= 1405, max=18592, per=97.50%, avg=8233.95, stdev=5096.57, samples=19 00:22:20.529 iops : min= 351, max= 4648, avg=2058.47, stdev=1274.16, samples=19 00:22:20.529 lat (usec) : 500=69.42%, 750=27.14%, 1000=0.03% 00:22:20.529 lat (msec) : 2=0.02%, 50=3.39% 00:22:20.529 cpu : usr=90.74%, sys=8.15%, ctx=36, majf=0, minf=9 00:22:20.529 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:20.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:20.529 issued rwts: total=21112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:20.529 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:20.529 00:22:20.529 Run status group 0 (all jobs): 00:22:20.529 READ: bw=8444KiB/s (8647kB/s), 8444KiB/s-8444KiB/s (8647kB/s-8647kB/s), io=82.5MiB (86.5MB), run=10001-10001msec 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.529 00:22:20.529 real 0m10.923s 00:22:20.529 user 0m9.679s 00:22:20.529 sys 0m1.033s 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 ************************************ 00:22:20.529 END TEST fio_dif_1_default 00:22:20.529 ************************************ 00:22:20.529 20:38:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:20.529 20:38:40 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:22:20.529 20:38:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:20.529 20:38:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 ************************************ 00:22:20.529 START TEST fio_dif_1_multi_subsystems 00:22:20.529 ************************************ 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # 
fio_dif_1_multi_subsystems 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 bdev_null0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.529 [2024-07-15 20:38:40.493599] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:22:20.529 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.530 bdev_null1 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.530 20:38:40 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.530 { 00:22:20.530 "params": { 00:22:20.530 "name": "Nvme$subsystem", 00:22:20.530 "trtype": "$TEST_TRANSPORT", 00:22:20.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.530 "adrfam": "ipv4", 00:22:20.530 "trsvcid": "$NVMF_PORT", 00:22:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.530 "hdgst": ${hdgst:-false}, 00:22:20.530 "ddgst": ${ddgst:-false} 00:22:20.530 }, 00:22:20.530 "method": "bdev_nvme_attach_controller" 00:22:20.530 } 00:22:20.530 EOF 00:22:20.530 )") 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:20.530 { 00:22:20.530 "params": { 00:22:20.530 "name": "Nvme$subsystem", 00:22:20.530 "trtype": "$TEST_TRANSPORT", 00:22:20.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:20.530 "adrfam": "ipv4", 00:22:20.530 "trsvcid": "$NVMF_PORT", 00:22:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:20.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:20.530 "hdgst": ${hdgst:-false}, 00:22:20.530 "ddgst": ${ddgst:-false} 00:22:20.530 }, 00:22:20.530 "method": "bdev_nvme_attach_controller" 00:22:20.530 } 00:22:20.530 EOF 00:22:20.530 )") 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
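
Pulling the target-side xtrace above together: for the two-subsystem DIF run, each subsystem gets its own null bdev with 16 bytes of metadata and DIF type 1, exported over the TCP transport created earlier with --dif-insert-or-strip. The sketch below consolidates that sequence using scripts/rpc.py in place of the test's rpc_cmd wrapper, against the default /var/tmp/spdk.sock socket.

  rootdir=/home/vagrant/spdk_repo/spdk
  rpc="$rootdir/scripts/rpc.py"   # stand-in for the test's rpc_cmd wrapper

  for sub in 0 1; do
      # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
      "$rpc" bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.2 -s 4420
  done

With both subsystems listening on 10.0.0.2:4420, the fio initiator side then attaches one controller per subnqn, which is what the two-controller JSON being generated here describes.
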
00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:20.530 "params": { 00:22:20.530 "name": "Nvme0", 00:22:20.530 "trtype": "tcp", 00:22:20.530 "traddr": "10.0.0.2", 00:22:20.530 "adrfam": "ipv4", 00:22:20.530 "trsvcid": "4420", 00:22:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:20.530 "hdgst": false, 00:22:20.530 "ddgst": false 00:22:20.530 }, 00:22:20.530 "method": "bdev_nvme_attach_controller" 00:22:20.530 },{ 00:22:20.530 "params": { 00:22:20.530 "name": "Nvme1", 00:22:20.530 "trtype": "tcp", 00:22:20.530 "traddr": "10.0.0.2", 00:22:20.530 "adrfam": "ipv4", 00:22:20.530 "trsvcid": "4420", 00:22:20.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:20.530 "hdgst": false, 00:22:20.530 "ddgst": false 00:22:20.530 }, 00:22:20.530 "method": "bdev_nvme_attach_controller" 00:22:20.530 }' 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:20.530 20:38:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:20.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:20.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:22:20.530 fio-3.35 00:22:20.530 Starting 2 threads 00:22:30.494 00:22:30.494 filename0: (groupid=0, jobs=1): err= 0: pid=97879: Mon Jul 15 20:38:51 2024 00:22:30.494 read: IOPS=197, BW=792KiB/s (811kB/s)(7920KiB/10005msec) 00:22:30.494 slat (nsec): min=7802, max=67482, avg=10895.53, stdev=6404.51 00:22:30.494 clat (usec): min=460, max=42072, avg=20176.03, stdev=20257.77 00:22:30.494 lat (usec): min=468, max=42105, avg=20186.92, stdev=20257.91 00:22:30.494 clat percentiles (usec): 00:22:30.494 | 1.00th=[ 474], 5.00th=[ 490], 10.00th=[ 498], 20.00th=[ 515], 00:22:30.494 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 1156], 60.00th=[41157], 00:22:30.494 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:22:30.494 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:22:30.494 | 99.99th=[42206] 00:22:30.494 bw ( KiB/s): min= 416, max= 1696, per=47.84%, avg=795.00, stdev=331.58, samples=19 00:22:30.494 iops : 
min= 104, max= 424, avg=198.74, stdev=82.89, samples=19 00:22:30.494 lat (usec) : 500=11.41%, 750=34.65%, 1000=0.61% 00:22:30.494 lat (msec) : 2=4.85%, 4=0.20%, 50=48.28% 00:22:30.494 cpu : usr=94.99%, sys=4.53%, ctx=11, majf=0, minf=0 00:22:30.494 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:30.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.494 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.494 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:30.494 filename1: (groupid=0, jobs=1): err= 0: pid=97880: Mon Jul 15 20:38:51 2024 00:22:30.494 read: IOPS=217, BW=871KiB/s (892kB/s)(8720KiB/10014msec) 00:22:30.494 slat (nsec): min=7752, max=59732, avg=10574.17, stdev=6078.36 00:22:30.494 clat (usec): min=456, max=42157, avg=18338.51, stdev=20125.63 00:22:30.494 lat (usec): min=464, max=42182, avg=18349.08, stdev=20125.98 00:22:30.494 clat percentiles (usec): 00:22:30.494 | 1.00th=[ 469], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 502], 00:22:30.494 | 30.00th=[ 515], 40.00th=[ 570], 50.00th=[ 627], 60.00th=[40633], 00:22:30.494 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:22:30.494 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:22:30.494 | 99.99th=[42206] 00:22:30.494 bw ( KiB/s): min= 512, max= 2400, per=52.36%, avg=870.15, stdev=391.36, samples=20 00:22:30.494 iops : min= 128, max= 600, avg=217.50, stdev=97.85, samples=20 00:22:30.494 lat (usec) : 500=19.13%, 750=34.63%, 1000=0.18% 00:22:30.494 lat (msec) : 2=2.20%, 50=43.85% 00:22:30.494 cpu : usr=95.47%, sys=4.02%, ctx=14, majf=0, minf=0 00:22:30.494 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:30.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:30.494 issued rwts: total=2180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:30.494 latency : target=0, window=0, percentile=100.00%, depth=4 00:22:30.494 00:22:30.494 Run status group 0 (all jobs): 00:22:30.494 READ: bw=1662KiB/s (1702kB/s), 792KiB/s-871KiB/s (811kB/s-892kB/s), io=16.2MiB (17.0MB), run=10005-10014msec 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.494 20:38:51 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.494 00:22:30.494 real 0m11.094s 00:22:30.494 user 0m19.852s 00:22:30.494 sys 0m1.102s 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.494 20:38:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:22:30.494 ************************************ 00:22:30.494 END TEST fio_dif_1_multi_subsystems 00:22:30.494 ************************************ 00:22:30.494 20:38:51 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:30.494 20:38:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:22:30.494 20:38:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:30.494 20:38:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:30.494 20:38:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:30.494 ************************************ 00:22:30.494 START TEST fio_dif_rand_params 00:22:30.494 ************************************ 00:22:30.494 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:22:30.494 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:22:30.494 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
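Note: the destroy_subsystems teardown just traced and the create_subsystem setup that starts here reduce to a fixed RPC sequence: a DIF-capable null bdev is created and exported through an NVMe/TCP subsystem, and on teardown the subsystem is deleted before its backing bdev. The sketch below replays that sequence against scripts/rpc.py. The RPC names, arguments, and listener address are copied from the trace; the rpc() wrapper, the SPDK_DIR default, and the assumption that an nvmf target with a TCP transport is already running are added for the sketch (rpc_cmd in the trace is the autotest wrapper around the same interface).

#!/usr/bin/env bash
# Replay the create_subsystem/destroy_subsystem RPC sequence for one subsystem id.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

sub=0
nqn="nqn.2016-06.io.spdk:cnode$sub"

# setup: 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 3
rpc nvmf_create_subsystem "$nqn" --serial-number "53313233-$sub" --allow-any-host
rpc nvmf_subsystem_add_ns "$nqn" "bdev_null$sub"
rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

# ... run fio against the exported namespace here ...

# teardown: delete the subsystem first, then the null bdev behind it
rpc nvmf_delete_subsystem "$nqn"
rpc bdev_null_delete "bdev_null$sub"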
00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:30.495 bdev_null0 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:30.495 [2024-07-15 20:38:51.634553] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:30.495 20:38:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:30.495 { 00:22:30.495 "params": { 00:22:30.495 "name": "Nvme$subsystem", 00:22:30.495 "trtype": "$TEST_TRANSPORT", 00:22:30.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.495 "adrfam": "ipv4", 00:22:30.495 "trsvcid": "$NVMF_PORT", 00:22:30.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.495 "hdgst": ${hdgst:-false}, 00:22:30.495 "ddgst": ${ddgst:-false} 00:22:30.495 }, 00:22:30.495 "method": "bdev_nvme_attach_controller" 00:22:30.495 } 00:22:30.495 EOF 00:22:30.495 )") 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
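Note: the sanitizer loop interleaved with the config generation above is the fio-plugin preload check from autotest_common.sh: ldd is run on the spdk_bdev fio plugin, the output is grepped for libasan / libclang_rt.asan, and the third column (the resolved library path) is prepended to LD_PRELOAD so the sanitizer runtime is loaded before fio dlopen()s the plugin. In this run both greps come back empty, so only the plugin itself ends up in LD_PRELOAD, as seen a few lines below. A simplified standalone version of that check, with the plugin and fio paths taken from the trace and the rest a reimplementation rather than the exact helper:

#!/usr/bin/env bash
# Detect an ASAN runtime linked into the SPDK fio plugin and preload it (plus
# the plugin itself) before launching fio; remaining arguments are passed to fio.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')

asan_lib=""
for sanitizer in "${sanitizers[@]}"; do
    # ldd prints "libasan.so.N => /path/to/libasan.so.N (0x...)"; field 3 is the path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done

# With no sanitizer linked in, this degenerates to LD_PRELOAD=" $plugin",
# exactly the value shown in the trace.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$@"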
00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:30.495 "params": { 00:22:30.495 "name": "Nvme0", 00:22:30.495 "trtype": "tcp", 00:22:30.495 "traddr": "10.0.0.2", 00:22:30.495 "adrfam": "ipv4", 00:22:30.495 "trsvcid": "4420", 00:22:30.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:30.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:30.495 "hdgst": false, 00:22:30.495 "ddgst": false 00:22:30.495 }, 00:22:30.495 "method": "bdev_nvme_attach_controller" 00:22:30.495 }' 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:30.495 20:38:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:30.495 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:30.495 ... 
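Note: the job-spec line fio just echoed reflects the parameters set at the top of this test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5): a single [filename0] section backed by the bdev attached above, run by three threads at queue depth 3 with 128 KiB random reads. The job file itself (gen_fio_conf's output) is handed to fio over /dev/fd/61 and never appears in the log, so the version below is only an approximation consistent with the visible parameters; thread=1 and direct=1 are assumptions about gen_fio_conf, and filename=Nvme0n1 assumes the usual SPDK naming of the first namespace behind the Nvme0 controller.

#!/usr/bin/env bash
# Write an approximate equivalent of the generated fio job file; the spdk_bdev
# ioengine itself is passed on the fio command line in the trace, not here.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
thread=1
direct=1
time_based=1
runtime=5
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1
EOF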
00:22:30.495 fio-3.35 00:22:30.495 Starting 3 threads 00:22:37.049 00:22:37.049 filename0: (groupid=0, jobs=1): err= 0: pid=98026: Mon Jul 15 20:38:57 2024 00:22:37.049 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5004msec) 00:22:37.049 slat (nsec): min=7878, max=73890, avg=13425.52, stdev=4933.72 00:22:37.049 clat (usec): min=4908, max=56079, avg=13712.50, stdev=6783.75 00:22:37.049 lat (usec): min=4922, max=56103, avg=13725.92, stdev=6784.00 00:22:37.049 clat percentiles (usec): 00:22:37.049 | 1.00th=[ 7373], 5.00th=[ 8717], 10.00th=[10814], 20.00th=[11731], 00:22:37.049 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:22:37.049 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14615], 95.00th=[15795], 00:22:37.049 | 99.00th=[53740], 99.50th=[55313], 99.90th=[55837], 99.95th=[55837], 00:22:37.049 | 99.99th=[55837] 00:22:37.049 bw ( KiB/s): min=21760, max=30720, per=32.11%, avg=27909.80, stdev=2959.44, samples=10 00:22:37.049 iops : min= 170, max= 240, avg=218.00, stdev=23.09, samples=10 00:22:37.049 lat (msec) : 10=6.40%, 20=90.85%, 50=0.55%, 100=2.20% 00:22:37.049 cpu : usr=92.00%, sys=6.52%, ctx=15, majf=0, minf=0 00:22:37.049 IO depths : 1=6.0%, 2=94.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.049 issued rwts: total=1093,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:37.049 filename0: (groupid=0, jobs=1): err= 0: pid=98027: Mon Jul 15 20:38:57 2024 00:22:37.049 read: IOPS=251, BW=31.5MiB/s (33.0MB/s)(158MiB/5005msec) 00:22:37.049 slat (nsec): min=4734, max=52461, avg=13915.32, stdev=4440.88 00:22:37.049 clat (usec): min=6464, max=54688, avg=11884.60, stdev=5772.06 00:22:37.049 lat (usec): min=6476, max=54699, avg=11898.51, stdev=5772.74 00:22:37.049 clat percentiles (usec): 00:22:37.049 | 1.00th=[ 7046], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[10159], 00:22:37.049 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:22:37.049 | 70.00th=[11731], 80.00th=[11994], 90.00th=[13042], 95.00th=[14091], 00:22:37.049 | 99.00th=[51643], 99.50th=[52691], 99.90th=[53740], 99.95th=[54789], 00:22:37.049 | 99.99th=[54789] 00:22:37.049 bw ( KiB/s): min=20224, max=38912, per=37.05%, avg=32204.80, stdev=5214.14, samples=10 00:22:37.049 iops : min= 158, max= 304, avg=251.60, stdev=40.74, samples=10 00:22:37.049 lat (msec) : 10=16.49%, 20=81.60%, 50=0.16%, 100=1.74% 00:22:37.049 cpu : usr=91.75%, sys=6.53%, ctx=7, majf=0, minf=0 00:22:37.049 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.049 issued rwts: total=1261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:37.049 filename0: (groupid=0, jobs=1): err= 0: pid=98028: Mon Jul 15 20:38:57 2024 00:22:37.049 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(131MiB/5005msec) 00:22:37.049 slat (nsec): min=7835, max=42922, avg=13555.56, stdev=4724.14 00:22:37.049 clat (usec): min=4182, max=58101, avg=14344.33, stdev=4192.63 00:22:37.049 lat (usec): min=4193, max=58109, avg=14357.88, stdev=4192.86 00:22:37.049 clat percentiles (usec): 00:22:37.049 | 1.00th=[ 4490], 5.00th=[ 4686], 10.00th=[ 9241], 20.00th=[11600], 
00:22:37.049 | 30.00th=[14353], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:22:37.049 | 70.00th=[16057], 80.00th=[16319], 90.00th=[17171], 95.00th=[18220], 00:22:37.049 | 99.00th=[20841], 99.50th=[21365], 99.90th=[56361], 99.95th=[57934], 00:22:37.049 | 99.99th=[57934] 00:22:37.050 bw ( KiB/s): min=22272, max=35584, per=30.69%, avg=26675.20, stdev=4150.39, samples=10 00:22:37.050 iops : min= 174, max= 278, avg=208.40, stdev=32.42, samples=10 00:22:37.050 lat (msec) : 10=15.02%, 20=83.25%, 50=1.44%, 100=0.29% 00:22:37.050 cpu : usr=92.15%, sys=6.33%, ctx=57, majf=0, minf=0 00:22:37.050 IO depths : 1=13.5%, 2=86.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:37.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.050 issued rwts: total=1045,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.050 latency : target=0, window=0, percentile=100.00%, depth=3 00:22:37.050 00:22:37.050 Run status group 0 (all jobs): 00:22:37.050 READ: bw=84.9MiB/s (89.0MB/s), 26.1MiB/s-31.5MiB/s (27.4MB/s-33.0MB/s), io=425MiB (446MB), run=5004-5005msec 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 bdev_null0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 [2024-07-15 20:38:57.558028] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 bdev_null1 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 bdev_null2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.050 { 00:22:37.050 "params": { 00:22:37.050 "name": "Nvme$subsystem", 00:22:37.050 "trtype": "$TEST_TRANSPORT", 00:22:37.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.050 "adrfam": "ipv4", 
00:22:37.050 "trsvcid": "$NVMF_PORT", 00:22:37.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.050 "hdgst": ${hdgst:-false}, 00:22:37.050 "ddgst": ${ddgst:-false} 00:22:37.050 }, 00:22:37.050 "method": "bdev_nvme_attach_controller" 00:22:37.050 } 00:22:37.050 EOF 00:22:37.050 )") 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:37.050 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.050 { 00:22:37.050 "params": { 00:22:37.051 "name": "Nvme$subsystem", 00:22:37.051 "trtype": "$TEST_TRANSPORT", 00:22:37.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.051 "adrfam": "ipv4", 00:22:37.051 "trsvcid": "$NVMF_PORT", 00:22:37.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.051 "hdgst": ${hdgst:-false}, 00:22:37.051 "ddgst": ${ddgst:-false} 00:22:37.051 }, 00:22:37.051 "method": "bdev_nvme_attach_controller" 00:22:37.051 } 00:22:37.051 EOF 00:22:37.051 )") 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:37.051 
20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:37.051 { 00:22:37.051 "params": { 00:22:37.051 "name": "Nvme$subsystem", 00:22:37.051 "trtype": "$TEST_TRANSPORT", 00:22:37.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:37.051 "adrfam": "ipv4", 00:22:37.051 "trsvcid": "$NVMF_PORT", 00:22:37.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:37.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:37.051 "hdgst": ${hdgst:-false}, 00:22:37.051 "ddgst": ${ddgst:-false} 00:22:37.051 }, 00:22:37.051 "method": "bdev_nvme_attach_controller" 00:22:37.051 } 00:22:37.051 EOF 00:22:37.051 )") 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:37.051 "params": { 00:22:37.051 "name": "Nvme0", 00:22:37.051 "trtype": "tcp", 00:22:37.051 "traddr": "10.0.0.2", 00:22:37.051 "adrfam": "ipv4", 00:22:37.051 "trsvcid": "4420", 00:22:37.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:37.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:37.051 "hdgst": false, 00:22:37.051 "ddgst": false 00:22:37.051 }, 00:22:37.051 "method": "bdev_nvme_attach_controller" 00:22:37.051 },{ 00:22:37.051 "params": { 00:22:37.051 "name": "Nvme1", 00:22:37.051 "trtype": "tcp", 00:22:37.051 "traddr": "10.0.0.2", 00:22:37.051 "adrfam": "ipv4", 00:22:37.051 "trsvcid": "4420", 00:22:37.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.051 "hdgst": false, 00:22:37.051 "ddgst": false 00:22:37.051 }, 00:22:37.051 "method": "bdev_nvme_attach_controller" 00:22:37.051 },{ 00:22:37.051 "params": { 00:22:37.051 "name": "Nvme2", 00:22:37.051 "trtype": "tcp", 00:22:37.051 "traddr": "10.0.0.2", 00:22:37.051 "adrfam": "ipv4", 00:22:37.051 "trsvcid": "4420", 00:22:37.051 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:37.051 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:37.051 "hdgst": false, 00:22:37.051 "ddgst": false 00:22:37.051 }, 00:22:37.051 "method": "bdev_nvme_attach_controller" 00:22:37.051 }' 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 
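Note: everything the next fio invocation needs is now in place; the JSON printed above is the bdev attach config for all three controllers, and the job file has been generated alongside it. Both reach fio as /dev/fd/62 and /dev/fd/61 because dif.sh hands them over via process substitution rather than temporary files, which is why gen_nvmf_target_json and gen_fio_conf appear in the trace next to the fio_bdev call. A tiny self-contained demonstration of that wiring, with a cat-based stand-in for fio_bdev and echo/printf standing in for the two generators:

#!/usr/bin/env bash
# Each <(command) expands to a /dev/fd/NN path whose contents are the command's
# stdout, which is how fio ends up being invoked with /dev/fd/62 /dev/fd/61.
consume() {
    echo "json config at $1:"; cat "$1"
    echo "job file at   $2:"; cat "$2"
}
consume <(echo '{"subsystems": []}') <(printf '[global]\nbs=4k\n')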
00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:37.051 20:38:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:37.051 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:37.051 ... 00:22:37.051 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:37.051 ... 00:22:37.051 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:22:37.051 ... 00:22:37.051 fio-3.35 00:22:37.051 Starting 24 threads 00:22:49.310 00:22:49.310 filename0: (groupid=0, jobs=1): err= 0: pid=98124: Mon Jul 15 20:39:08 2024 00:22:49.310 read: IOPS=149, BW=596KiB/s (610kB/s)(5992KiB/10053msec) 00:22:49.310 slat (nsec): min=4921, max=58459, avg=13590.96, stdev=6205.36 00:22:49.310 clat (msec): min=6, max=404, avg=107.21, stdev=68.40 00:22:49.310 lat (msec): min=6, max=404, avg=107.23, stdev=68.40 00:22:49.310 clat percentiles (msec): 00:22:49.310 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 58], 20.00th=[ 69], 00:22:49.310 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 96], 00:22:49.310 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 209], 95.00th=[ 262], 00:22:49.310 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:22:49.310 | 99.99th=[ 405] 00:22:49.310 bw ( KiB/s): min= 128, max= 1408, per=4.55%, avg=592.45, stdev=290.74, samples=20 00:22:49.310 iops : min= 32, max= 352, avg=148.10, stdev=72.68, samples=20 00:22:49.310 lat (msec) : 10=3.20%, 20=1.07%, 50=1.20%, 100=57.21%, 250=31.98% 00:22:49.310 lat (msec) : 500=5.34% 00:22:49.310 cpu : usr=36.90%, sys=1.30%, ctx=1253, majf=0, minf=9 00:22:49.310 IO depths : 1=2.1%, 2=4.6%, 4=14.4%, 8=67.7%, 16=11.2%, 32=0.0%, >=64=0.0% 00:22:49.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.310 complete : 0=0.0%, 4=91.4%, 8=3.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.310 issued rwts: total=1498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.310 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.310 filename0: (groupid=0, jobs=1): err= 0: pid=98125: Mon Jul 15 20:39:08 2024 00:22:49.310 read: IOPS=142, BW=570KiB/s (584kB/s)(5724KiB/10042msec) 00:22:49.310 slat (nsec): min=5885, max=54140, avg=13092.63, stdev=6113.75 00:22:49.310 clat (msec): min=40, max=347, avg=112.09, stdev=63.11 00:22:49.310 lat (msec): min=40, max=347, avg=112.10, stdev=63.11 00:22:49.310 clat percentiles (msec): 00:22:49.310 | 1.00th=[ 48], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 72], 00:22:49.310 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:22:49.310 | 70.00th=[ 115], 80.00th=[ 157], 90.00th=[ 213], 95.00th=[ 228], 00:22:49.310 | 99.00th=[ 300], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 347], 00:22:49.310 | 99.99th=[ 347] 00:22:49.310 bw ( KiB/s): min= 128, max= 944, per=4.35%, avg=566.00, stdev=242.64, samples=20 00:22:49.311 iops : min= 32, max= 236, avg=141.50, stdev=60.66, samples=20 00:22:49.311 lat (msec) : 50=4.33%, 100=60.38%, 250=30.40%, 500=4.89% 00:22:49.311 cpu : usr=35.03%, sys=1.31%, ctx=946, majf=0, minf=9 00:22:49.311 IO depths : 1=1.3%, 2=2.7%, 4=10.0%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 
complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=98126: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=119, BW=478KiB/s (490kB/s)(4788KiB/10014msec) 00:22:49.311 slat (usec): min=7, max=4041, avg=17.91, stdev=116.57 00:22:49.311 clat (msec): min=30, max=408, avg=133.59, stdev=78.81 00:22:49.311 lat (msec): min=30, max=408, avg=133.60, stdev=78.81 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 44], 5.00th=[ 56], 10.00th=[ 66], 20.00th=[ 78], 00:22:49.311 | 30.00th=[ 91], 40.00th=[ 107], 50.00th=[ 112], 60.00th=[ 116], 00:22:49.311 | 70.00th=[ 133], 80.00th=[ 155], 90.00th=[ 279], 95.00th=[ 313], 00:22:49.311 | 99.00th=[ 409], 99.50th=[ 409], 99.90th=[ 409], 99.95th=[ 409], 00:22:49.311 | 99.99th=[ 409] 00:22:49.311 bw ( KiB/s): min= 126, max= 640, per=3.47%, avg=451.26, stdev=196.91, samples=19 00:22:49.311 iops : min= 31, max= 160, avg=112.79, stdev=49.27, samples=19 00:22:49.311 lat (msec) : 50=2.84%, 100=32.66%, 250=51.29%, 500=13.20% 00:22:49.311 cpu : usr=40.30%, sys=1.60%, ctx=1145, majf=0, minf=9 00:22:49.311 IO depths : 1=4.8%, 2=9.8%, 4=21.5%, 8=56.2%, 16=7.8%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=93.0%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=98127: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=130, BW=524KiB/s (536kB/s)(5240KiB/10004msec) 00:22:49.311 slat (usec): min=3, max=7051, avg=17.37, stdev=194.57 00:22:49.311 clat (msec): min=3, max=383, avg=122.03, stdev=68.79 00:22:49.311 lat (msec): min=3, max=383, avg=122.05, stdev=68.80 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 4], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 72], 00:22:49.311 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 121], 00:22:49.311 | 70.00th=[ 136], 80.00th=[ 161], 90.00th=[ 228], 95.00th=[ 275], 00:22:49.311 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:22:49.311 | 99.99th=[ 384] 00:22:49.311 bw ( KiB/s): min= 176, max= 1112, per=3.82%, avg=497.68, stdev=221.04, samples=19 00:22:49.311 iops : min= 44, max= 278, avg=124.42, stdev=55.26, samples=19 00:22:49.311 lat (msec) : 4=1.15%, 10=0.76%, 20=0.53%, 50=6.34%, 100=36.34% 00:22:49.311 lat (msec) : 250=46.64%, 500=8.24% 00:22:49.311 cpu : usr=34.40%, sys=1.23%, ctx=919, majf=0, minf=9 00:22:49.311 IO depths : 1=1.6%, 2=3.4%, 4=10.8%, 8=72.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=90.3%, 8=5.4%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=98129: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=127, BW=511KiB/s (524kB/s)(5132KiB/10038msec) 00:22:49.311 slat (usec): min=7, max=8042, avg=19.69, stdev=224.23 00:22:49.311 clat (msec): min=35, max=410, avg=124.96, stdev=71.44 00:22:49.311 lat (msec): min=35, max=410, avg=124.98, stdev=71.45 00:22:49.311 clat percentiles 
(msec): 00:22:49.311 | 1.00th=[ 40], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 72], 00:22:49.311 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 104], 60.00th=[ 109], 00:22:49.311 | 70.00th=[ 125], 80.00th=[ 157], 90.00th=[ 222], 95.00th=[ 292], 00:22:49.311 | 99.00th=[ 388], 99.50th=[ 388], 99.90th=[ 409], 99.95th=[ 409], 00:22:49.311 | 99.99th=[ 409] 00:22:49.311 bw ( KiB/s): min= 208, max= 1024, per=3.89%, avg=506.25, stdev=226.58, samples=20 00:22:49.311 iops : min= 52, max= 256, avg=126.55, stdev=56.63, samples=20 00:22:49.311 lat (msec) : 50=3.12%, 100=42.48%, 250=45.44%, 500=8.96% 00:22:49.311 cpu : usr=34.84%, sys=1.28%, ctx=998, majf=0, minf=9 00:22:49.311 IO depths : 1=3.1%, 2=6.7%, 4=15.8%, 8=64.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=98130: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=154, BW=618KiB/s (632kB/s)(6196KiB/10032msec) 00:22:49.311 slat (usec): min=3, max=4035, avg=14.97, stdev=102.44 00:22:49.311 clat (msec): min=31, max=354, avg=103.48, stdev=64.39 00:22:49.311 lat (msec): min=31, max=354, avg=103.49, stdev=64.39 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 59], 00:22:49.311 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 88], 00:22:49.311 | 70.00th=[ 107], 80.00th=[ 136], 90.00th=[ 209], 95.00th=[ 251], 00:22:49.311 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 355], 00:22:49.311 | 99.99th=[ 355] 00:22:49.311 bw ( KiB/s): min= 208, max= 1200, per=4.71%, avg=613.20, stdev=287.32, samples=20 00:22:49.311 iops : min= 52, max= 300, avg=153.30, stdev=71.83, samples=20 00:22:49.311 lat (msec) : 50=10.78%, 100=56.49%, 250=27.57%, 500=5.16% 00:22:49.311 cpu : usr=41.32%, sys=1.69%, ctx=1072, majf=0, minf=9 00:22:49.311 IO depths : 1=0.3%, 2=0.5%, 4=6.9%, 8=78.9%, 16=13.4%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=89.1%, 8=6.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1549,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=98131: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=148, BW=595KiB/s (610kB/s)(5984KiB/10051msec) 00:22:49.311 slat (usec): min=7, max=8051, avg=21.10, stdev=232.29 00:22:49.311 clat (msec): min=8, max=540, avg=107.26, stdev=78.36 00:22:49.311 lat (msec): min=8, max=540, avg=107.28, stdev=78.36 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 9], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 63], 00:22:49.311 | 30.00th=[ 71], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 92], 00:22:49.311 | 70.00th=[ 106], 80.00th=[ 124], 90.00th=[ 213], 95.00th=[ 271], 00:22:49.311 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 542], 99.95th=[ 542], 00:22:49.311 | 99.99th=[ 542] 00:22:49.311 bw ( KiB/s): min= 128, max= 1386, per=4.55%, avg=592.00, stdev=320.01, samples=20 00:22:49.311 iops : min= 32, max= 346, avg=147.95, stdev=79.91, samples=20 00:22:49.311 lat (msec) : 10=1.07%, 20=2.14%, 50=6.89%, 100=57.69%, 250=25.87% 00:22:49.311 lat (msec) : 500=6.02%, 750=0.33% 00:22:49.311 cpu : 
usr=34.19%, sys=1.33%, ctx=978, majf=0, minf=9 00:22:49.311 IO depths : 1=2.1%, 2=4.6%, 4=13.8%, 8=68.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename0: (groupid=0, jobs=1): err= 0: pid=98132: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=131, BW=526KiB/s (538kB/s)(5280KiB/10042msec) 00:22:49.311 slat (usec): min=7, max=8047, avg=26.08, stdev=271.19 00:22:49.311 clat (msec): min=39, max=458, avg=121.45, stdev=79.32 00:22:49.311 lat (msec): min=39, max=458, avg=121.47, stdev=79.32 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 50], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 72], 00:22:49.311 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 96], 60.00th=[ 106], 00:22:49.311 | 70.00th=[ 111], 80.00th=[ 128], 90.00th=[ 220], 95.00th=[ 300], 00:22:49.311 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 460], 99.95th=[ 460], 00:22:49.311 | 99.99th=[ 460] 00:22:49.311 bw ( KiB/s): min= 128, max= 896, per=4.00%, avg=521.60, stdev=235.16, samples=20 00:22:49.311 iops : min= 32, max= 224, avg=130.40, stdev=58.79, samples=20 00:22:49.311 lat (msec) : 50=1.59%, 100=51.29%, 250=38.64%, 500=8.48% 00:22:49.311 cpu : usr=41.59%, sys=1.41%, ctx=1245, majf=0, minf=9 00:22:49.311 IO depths : 1=3.3%, 2=7.0%, 4=16.8%, 8=63.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=91.9%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename1: (groupid=0, jobs=1): err= 0: pid=98133: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=152, BW=609KiB/s (623kB/s)(6124KiB/10058msec) 00:22:49.311 slat (usec): min=4, max=11039, avg=29.83, stdev=331.13 00:22:49.311 clat (msec): min=29, max=408, avg=104.85, stdev=63.00 00:22:49.311 lat (msec): min=29, max=408, avg=104.88, stdev=63.00 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 30], 5.00th=[ 53], 10.00th=[ 56], 20.00th=[ 66], 00:22:49.311 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 88], 00:22:49.311 | 70.00th=[ 105], 80.00th=[ 126], 90.00th=[ 205], 95.00th=[ 232], 00:22:49.311 | 99.00th=[ 384], 99.50th=[ 393], 99.90th=[ 409], 99.95th=[ 409], 00:22:49.311 | 99.99th=[ 409] 00:22:49.311 bw ( KiB/s): min= 208, max= 1017, per=4.65%, avg=605.65, stdev=254.74, samples=20 00:22:49.311 iops : min= 52, max= 254, avg=151.40, stdev=63.66, samples=20 00:22:49.311 lat (msec) : 50=3.59%, 100=64.14%, 250=28.61%, 500=3.66% 00:22:49.311 cpu : usr=40.83%, sys=1.87%, ctx=1712, majf=0, minf=9 00:22:49.311 IO depths : 1=2.0%, 2=4.5%, 4=13.3%, 8=69.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:22:49.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 complete : 0=0.0%, 4=91.0%, 8=3.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.311 issued rwts: total=1531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.311 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.311 filename1: (groupid=0, jobs=1): err= 0: pid=98135: Mon Jul 15 20:39:08 2024 00:22:49.311 read: IOPS=169, BW=676KiB/s (692kB/s)(6804KiB/10063msec) 00:22:49.311 slat (usec): min=5, max=8042, avg=22.16, 
stdev=275.00 00:22:49.311 clat (msec): min=6, max=362, avg=94.49, stdev=67.53 00:22:49.311 lat (msec): min=6, max=362, avg=94.51, stdev=67.54 00:22:49.311 clat percentiles (msec): 00:22:49.311 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 55], 00:22:49.311 | 30.00th=[ 63], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 82], 00:22:49.311 | 70.00th=[ 89], 80.00th=[ 107], 90.00th=[ 199], 95.00th=[ 271], 00:22:49.311 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 363], 00:22:49.311 | 99.99th=[ 363] 00:22:49.312 bw ( KiB/s): min= 176, max= 1536, per=5.17%, avg=673.55, stdev=354.77, samples=20 00:22:49.312 iops : min= 44, max= 384, avg=168.35, stdev=88.65, samples=20 00:22:49.312 lat (msec) : 10=2.82%, 20=0.94%, 50=11.52%, 100=62.49%, 250=15.64% 00:22:49.312 lat (msec) : 500=6.58% 00:22:49.312 cpu : usr=41.65%, sys=1.87%, ctx=1153, majf=0, minf=9 00:22:49.312 IO depths : 1=1.0%, 2=2.2%, 4=8.8%, 8=75.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:22:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.312 filename1: (groupid=0, jobs=1): err= 0: pid=98136: Mon Jul 15 20:39:08 2024 00:22:49.312 read: IOPS=161, BW=645KiB/s (661kB/s)(6492KiB/10060msec) 00:22:49.312 slat (usec): min=4, max=8046, avg=25.63, stdev=263.94 00:22:49.312 clat (msec): min=7, max=440, avg=98.84, stdev=63.65 00:22:49.312 lat (msec): min=7, max=441, avg=98.86, stdev=63.66 00:22:49.312 clat percentiles (msec): 00:22:49.312 | 1.00th=[ 9], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 59], 00:22:49.312 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:22:49.312 | 70.00th=[ 100], 80.00th=[ 121], 90.00th=[ 207], 95.00th=[ 228], 00:22:49.312 | 99.00th=[ 326], 99.50th=[ 347], 99.90th=[ 443], 99.95th=[ 443], 00:22:49.312 | 99.99th=[ 443] 00:22:49.312 bw ( KiB/s): min= 176, max= 1328, per=4.93%, avg=642.65, stdev=322.84, samples=20 00:22:49.312 iops : min= 44, max= 332, avg=160.65, stdev=80.69, samples=20 00:22:49.312 lat (msec) : 10=2.28%, 20=0.68%, 50=9.30%, 100=57.98%, 250=26.31% 00:22:49.312 lat (msec) : 500=3.45% 00:22:49.312 cpu : usr=33.58%, sys=1.39%, ctx=916, majf=0, minf=9 00:22:49.312 IO depths : 1=1.0%, 2=2.5%, 4=9.9%, 8=74.3%, 16=12.3%, 32=0.0%, >=64=0.0% 00:22:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.312 filename1: (groupid=0, jobs=1): err= 0: pid=98137: Mon Jul 15 20:39:08 2024 00:22:49.312 read: IOPS=124, BW=499KiB/s (511kB/s)(5004KiB/10020msec) 00:22:49.312 slat (nsec): min=7858, max=57728, avg=18440.18, stdev=10843.53 00:22:49.312 clat (msec): min=28, max=425, avg=127.93, stdev=70.87 00:22:49.312 lat (msec): min=28, max=425, avg=127.95, stdev=70.87 00:22:49.312 clat percentiles (msec): 00:22:49.312 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 73], 00:22:49.312 | 30.00th=[ 90], 40.00th=[ 97], 50.00th=[ 108], 60.00th=[ 111], 00:22:49.312 | 70.00th=[ 131], 80.00th=[ 159], 90.00th=[ 230], 95.00th=[ 300], 00:22:49.312 | 99.00th=[ 401], 99.50th=[ 426], 99.90th=[ 426], 99.95th=[ 426], 00:22:49.312 | 99.99th=[ 426] 00:22:49.312 bw ( KiB/s): min= 248, max= 792, per=3.80%, 
avg=494.00, stdev=194.58, samples=20 00:22:49.312 iops : min= 62, max= 198, avg=123.50, stdev=48.64, samples=20 00:22:49.312 lat (msec) : 50=1.84%, 100=38.93%, 250=51.16%, 500=8.07% 00:22:49.312 cpu : usr=31.88%, sys=1.41%, ctx=895, majf=0, minf=9 00:22:49.312 IO depths : 1=3.0%, 2=6.3%, 4=15.2%, 8=65.6%, 16=9.9%, 32=0.0%, >=64=0.0% 00:22:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 complete : 0=0.0%, 4=91.5%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 issued rwts: total=1251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.312 filename1: (groupid=0, jobs=1): err= 0: pid=98140: Mon Jul 15 20:39:08 2024 00:22:49.312 read: IOPS=152, BW=609KiB/s (623kB/s)(6124KiB/10063msec) 00:22:49.312 slat (usec): min=3, max=8051, avg=21.68, stdev=229.70 00:22:49.312 clat (msec): min=28, max=341, avg=104.96, stdev=56.52 00:22:49.312 lat (msec): min=28, max=341, avg=104.98, stdev=56.53 00:22:49.312 clat percentiles (msec): 00:22:49.312 | 1.00th=[ 29], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 61], 00:22:49.312 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 96], 00:22:49.312 | 70.00th=[ 108], 80.00th=[ 138], 90.00th=[ 201], 95.00th=[ 211], 00:22:49.312 | 99.00th=[ 288], 99.50th=[ 292], 99.90th=[ 342], 99.95th=[ 342], 00:22:49.312 | 99.99th=[ 342] 00:22:49.312 bw ( KiB/s): min= 256, max= 1080, per=4.66%, avg=606.00, stdev=256.75, samples=20 00:22:49.312 iops : min= 64, max= 270, avg=151.50, stdev=64.19, samples=20 00:22:49.312 lat (msec) : 50=6.53%, 100=57.54%, 250=34.23%, 500=1.70% 00:22:49.312 cpu : usr=37.84%, sys=1.31%, ctx=1398, majf=0, minf=9 00:22:49.312 IO depths : 1=0.5%, 2=1.1%, 4=6.9%, 8=78.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:22:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 complete : 0=0.0%, 4=89.1%, 8=6.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 issued rwts: total=1531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.312 filename1: (groupid=0, jobs=1): err= 0: pid=98141: Mon Jul 15 20:39:08 2024 00:22:49.312 read: IOPS=135, BW=541KiB/s (554kB/s)(5420KiB/10024msec) 00:22:49.312 slat (usec): min=7, max=8049, avg=32.81, stdev=311.42 00:22:49.312 clat (msec): min=39, max=553, avg=118.00, stdev=81.17 00:22:49.312 lat (msec): min=39, max=553, avg=118.03, stdev=81.16 00:22:49.312 clat percentiles (msec): 00:22:49.312 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 70], 00:22:49.312 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 87], 60.00th=[ 100], 00:22:49.312 | 70.00th=[ 112], 80.00th=[ 155], 90.00th=[ 224], 95.00th=[ 296], 00:22:49.312 | 99.00th=[ 550], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550], 00:22:49.312 | 99.99th=[ 550] 00:22:49.312 bw ( KiB/s): min= 128, max= 944, per=4.14%, avg=539.25, stdev=254.11, samples=20 00:22:49.312 iops : min= 32, max= 236, avg=134.80, stdev=63.53, samples=20 00:22:49.312 lat (msec) : 50=2.58%, 100=57.49%, 250=31.66%, 500=7.08%, 750=1.18% 00:22:49.312 cpu : usr=34.57%, sys=1.40%, ctx=955, majf=0, minf=9 00:22:49.312 IO depths : 1=3.0%, 2=6.4%, 4=16.4%, 8=64.4%, 16=9.7%, 32=0.0%, >=64=0.0% 00:22:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 issued rwts: total=1355,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 
00:22:49.312 filename1: (groupid=0, jobs=1): err= 0: pid=98142: Mon Jul 15 20:39:08 2024 00:22:49.312 read: IOPS=120, BW=484KiB/s (496kB/s)(4840KiB/10001msec) 00:22:49.312 slat (usec): min=7, max=8025, avg=29.88, stdev=345.99 00:22:49.312 clat (usec): min=1470, max=467463, avg=132021.39, stdev=89783.42 00:22:49.312 lat (usec): min=1482, max=467491, avg=132051.27, stdev=89778.13 00:22:49.312 clat percentiles (usec): 00:22:49.312 | 1.00th=[ 1549], 5.00th=[ 30016], 10.00th=[ 68682], 20.00th=[ 76022], 00:22:49.312 | 30.00th=[ 84411], 40.00th=[ 95945], 50.00th=[107480], 60.00th=[113771], 00:22:49.312 | 70.00th=[130548], 80.00th=[164627], 90.00th=[304088], 95.00th=[312476], 00:22:49.312 | 99.00th=[467665], 99.50th=[467665], 99.90th=[467665], 99.95th=[467665], 00:22:49.312 | 99.99th=[467665] 00:22:49.312 bw ( KiB/s): min= 128, max= 768, per=3.41%, avg=444.16, stdev=195.98, samples=19 00:22:49.312 iops : min= 32, max= 192, avg=111.00, stdev=48.98, samples=19 00:22:49.312 lat (msec) : 2=1.32%, 4=2.89%, 10=0.58%, 50=0.50%, 100=39.83% 00:22:49.312 lat (msec) : 250=41.07%, 500=13.80% 00:22:49.312 cpu : usr=34.46%, sys=1.58%, ctx=1030, majf=0, minf=9 00:22:49.312 IO depths : 1=3.2%, 2=6.9%, 4=16.6%, 8=63.6%, 16=9.8%, 32=0.0%, >=64=0.0% 00:22:49.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.312 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.312 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.312 filename1: (groupid=0, jobs=1): err= 0: pid=98143: Mon Jul 15 20:39:08 2024 00:22:49.312 read: IOPS=120, BW=480KiB/s (492kB/s)(4812KiB/10016msec) 00:22:49.312 slat (usec): min=7, max=8055, avg=37.61, stdev=400.91 00:22:49.312 clat (msec): min=16, max=466, avg=132.85, stdev=82.85 00:22:49.313 lat (msec): min=16, max=466, avg=132.89, stdev=82.84 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 72], 20.00th=[ 77], 00:22:49.313 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 111], 00:22:49.313 | 70.00th=[ 131], 80.00th=[ 157], 90.00th=[ 296], 95.00th=[ 305], 00:22:49.313 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:22:49.313 | 99.99th=[ 468] 00:22:49.313 bw ( KiB/s): min= 128, max= 768, per=3.54%, avg=461.21, stdev=208.36, samples=19 00:22:49.313 iops : min= 32, max= 192, avg=115.26, stdev=52.08, samples=19 00:22:49.313 lat (msec) : 20=0.91%, 50=1.66%, 100=40.07%, 250=44.64%, 500=12.72% 00:22:49.313 cpu : usr=31.94%, sys=1.37%, ctx=893, majf=0, minf=9 00:22:49.313 IO depths : 1=2.9%, 2=6.6%, 4=17.4%, 8=62.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98144: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=148, BW=595KiB/s (609kB/s)(5964KiB/10028msec) 00:22:49.313 slat (nsec): min=6599, max=49826, avg=14467.28, stdev=8449.34 00:22:49.313 clat (msec): min=33, max=341, avg=107.44, stdev=68.46 00:22:49.313 lat (msec): min=33, max=341, avg=107.45, stdev=68.47 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:22:49.313 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 81], 
60.00th=[ 86], 00:22:49.313 | 70.00th=[ 108], 80.00th=[ 148], 90.00th=[ 220], 95.00th=[ 275], 00:22:49.313 | 99.00th=[ 321], 99.50th=[ 334], 99.90th=[ 342], 99.95th=[ 342], 00:22:49.313 | 99.99th=[ 342] 00:22:49.313 bw ( KiB/s): min= 224, max= 1111, per=4.54%, avg=591.80, stdev=288.27, samples=20 00:22:49.313 iops : min= 56, max= 277, avg=147.90, stdev=72.01, samples=20 00:22:49.313 lat (msec) : 50=7.78%, 100=59.02%, 250=26.76%, 500=6.44% 00:22:49.313 cpu : usr=41.02%, sys=1.52%, ctx=1368, majf=0, minf=9 00:22:49.313 IO depths : 1=0.9%, 2=2.2%, 4=8.7%, 8=75.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1491,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98145: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=123, BW=494KiB/s (506kB/s)(4952KiB/10023msec) 00:22:49.313 slat (usec): min=7, max=8024, avg=19.79, stdev=227.78 00:22:49.313 clat (msec): min=29, max=401, avg=129.37, stdev=77.50 00:22:49.313 lat (msec): min=29, max=401, avg=129.39, stdev=77.51 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 71], 20.00th=[ 77], 00:22:49.313 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 111], 00:22:49.313 | 70.00th=[ 120], 80.00th=[ 144], 90.00th=[ 275], 95.00th=[ 313], 00:22:49.313 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:22:49.313 | 99.99th=[ 401] 00:22:49.313 bw ( KiB/s): min= 128, max= 848, per=3.75%, avg=488.65, stdev=210.30, samples=20 00:22:49.313 iops : min= 32, max= 212, avg=122.15, stdev=52.57, samples=20 00:22:49.313 lat (msec) : 50=1.37%, 100=43.78%, 250=41.60%, 500=13.25% 00:22:49.313 cpu : usr=38.93%, sys=1.60%, ctx=984, majf=0, minf=9 00:22:49.313 IO depths : 1=2.1%, 2=4.8%, 4=14.4%, 8=68.0%, 16=10.7%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98146: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=125, BW=502KiB/s (514kB/s)(5024KiB/10015msec) 00:22:49.313 slat (usec): min=4, max=4042, avg=17.35, stdev=113.84 00:22:49.313 clat (msec): min=46, max=401, avg=127.31, stdev=77.61 00:22:49.313 lat (msec): min=46, max=401, avg=127.32, stdev=77.61 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 51], 5.00th=[ 59], 10.00th=[ 66], 20.00th=[ 72], 00:22:49.313 | 30.00th=[ 80], 40.00th=[ 90], 50.00th=[ 107], 60.00th=[ 109], 00:22:49.313 | 70.00th=[ 123], 80.00th=[ 155], 90.00th=[ 284], 95.00th=[ 317], 00:22:49.313 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:22:49.313 | 99.99th=[ 401] 00:22:49.313 bw ( KiB/s): min= 128, max= 944, per=3.85%, avg=501.45, stdev=237.33, samples=20 00:22:49.313 iops : min= 32, max= 236, avg=125.35, stdev=59.32, samples=20 00:22:49.313 lat (msec) : 50=0.64%, 100=47.53%, 250=41.64%, 500=10.19% 00:22:49.313 cpu : usr=35.94%, sys=1.60%, ctx=997, majf=0, minf=9 00:22:49.313 IO depths : 1=3.3%, 2=7.2%, 4=17.8%, 8=62.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98147: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=129, BW=520KiB/s (532kB/s)(5216KiB/10033msec) 00:22:49.313 slat (usec): min=7, max=4036, avg=24.02, stdev=111.71 00:22:49.313 clat (msec): min=29, max=405, avg=122.87, stdev=76.79 00:22:49.313 lat (msec): min=29, max=405, avg=122.89, stdev=76.80 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 31], 5.00th=[ 53], 10.00th=[ 60], 20.00th=[ 72], 00:22:49.313 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 99], 60.00th=[ 108], 00:22:49.313 | 70.00th=[ 121], 80.00th=[ 136], 90.00th=[ 259], 95.00th=[ 305], 00:22:49.313 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:22:49.313 | 99.99th=[ 405] 00:22:49.313 bw ( KiB/s): min= 128, max= 896, per=3.96%, avg=515.20, stdev=242.87, samples=20 00:22:49.313 iops : min= 32, max= 224, avg=128.80, stdev=60.72, samples=20 00:22:49.313 lat (msec) : 50=3.68%, 100=48.16%, 250=37.27%, 500=10.89% 00:22:49.313 cpu : usr=36.51%, sys=1.49%, ctx=1028, majf=0, minf=9 00:22:49.313 IO depths : 1=2.5%, 2=5.8%, 4=15.6%, 8=65.5%, 16=10.6%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=91.5%, 8=3.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98148: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=148, BW=593KiB/s (607kB/s)(5944KiB/10032msec) 00:22:49.313 slat (usec): min=7, max=10056, avg=37.83, stdev=393.68 00:22:49.313 clat (msec): min=31, max=335, avg=107.64, stdev=56.05 00:22:49.313 lat (msec): min=31, max=335, avg=107.67, stdev=56.04 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 64], 00:22:49.313 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 100], 00:22:49.313 | 70.00th=[ 111], 80.00th=[ 155], 90.00th=[ 203], 95.00th=[ 215], 00:22:49.313 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 338], 99.95th=[ 338], 00:22:49.313 | 99.99th=[ 338] 00:22:49.313 bw ( KiB/s): min= 256, max= 944, per=4.52%, avg=588.00, stdev=225.01, samples=20 00:22:49.313 iops : min= 64, max= 236, avg=147.00, stdev=56.25, samples=20 00:22:49.313 lat (msec) : 50=7.07%, 100=53.30%, 250=37.89%, 500=1.75% 00:22:49.313 cpu : usr=33.83%, sys=1.48%, ctx=1012, majf=0, minf=9 00:22:49.313 IO depths : 1=0.1%, 2=0.4%, 4=6.0%, 8=79.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=89.4%, 8=6.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98149: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=116, BW=465KiB/s (476kB/s)(4664KiB/10026msec) 00:22:49.313 slat (usec): min=3, max=4022, avg=22.32, stdev=194.30 00:22:49.313 clat (msec): min=25, max=514, avg=137.36, stdev=83.55 00:22:49.313 lat (msec): min=25, max=514, avg=137.38, stdev=83.55 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 
1.00th=[ 27], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 82], 00:22:49.313 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 117], 00:22:49.313 | 70.00th=[ 131], 80.00th=[ 157], 90.00th=[ 300], 95.00th=[ 317], 00:22:49.313 | 99.00th=[ 439], 99.50th=[ 439], 99.90th=[ 514], 99.95th=[ 514], 00:22:49.313 | 99.99th=[ 514] 00:22:49.313 bw ( KiB/s): min= 128, max= 768, per=3.53%, avg=459.80, stdev=217.98, samples=20 00:22:49.313 iops : min= 32, max= 192, avg=114.95, stdev=54.50, samples=20 00:22:49.313 lat (msec) : 50=1.37%, 100=36.02%, 250=47.51%, 500=14.67%, 750=0.43% 00:22:49.313 cpu : usr=42.43%, sys=1.76%, ctx=1181, majf=0, minf=9 00:22:49.313 IO depths : 1=4.8%, 2=9.9%, 4=22.0%, 8=55.7%, 16=7.6%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98150: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=115, BW=460KiB/s (471kB/s)(4608KiB/10013msec) 00:22:49.313 slat (usec): min=4, max=8050, avg=38.42, stdev=392.45 00:22:49.313 clat (msec): min=32, max=399, avg=138.79, stdev=80.90 00:22:49.313 lat (msec): min=32, max=399, avg=138.83, stdev=80.89 00:22:49.313 clat percentiles (msec): 00:22:49.313 | 1.00th=[ 40], 5.00th=[ 70], 10.00th=[ 77], 20.00th=[ 84], 00:22:49.313 | 30.00th=[ 95], 40.00th=[ 106], 50.00th=[ 112], 60.00th=[ 118], 00:22:49.313 | 70.00th=[ 129], 80.00th=[ 163], 90.00th=[ 300], 95.00th=[ 317], 00:22:49.313 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:22:49.313 | 99.99th=[ 401] 00:22:49.313 bw ( KiB/s): min= 128, max= 768, per=3.49%, avg=454.40, stdev=200.15, samples=20 00:22:49.313 iops : min= 32, max= 192, avg=113.60, stdev=50.04, samples=20 00:22:49.313 lat (msec) : 50=1.65%, 100=32.38%, 250=51.48%, 500=14.50% 00:22:49.313 cpu : usr=41.07%, sys=1.80%, ctx=1196, majf=0, minf=9 00:22:49.313 IO depths : 1=3.8%, 2=8.3%, 4=19.4%, 8=59.5%, 16=8.9%, 32=0.0%, >=64=0.0% 00:22:49.313 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.313 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.313 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.313 filename2: (groupid=0, jobs=1): err= 0: pid=98151: Mon Jul 15 20:39:08 2024 00:22:49.313 read: IOPS=115, BW=461KiB/s (472kB/s)(4608KiB/10002msec) 00:22:49.313 slat (usec): min=7, max=8048, avg=32.45, stdev=334.54 00:22:49.313 clat (msec): min=17, max=463, avg=138.72, stdev=87.35 00:22:49.313 lat (msec): min=17, max=463, avg=138.75, stdev=87.37 00:22:49.314 clat percentiles (msec): 00:22:49.314 | 1.00th=[ 32], 5.00th=[ 61], 10.00th=[ 74], 20.00th=[ 84], 00:22:49.314 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 120], 00:22:49.314 | 70.00th=[ 136], 80.00th=[ 155], 90.00th=[ 300], 95.00th=[ 309], 00:22:49.314 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 464], 99.95th=[ 464], 00:22:49.314 | 99.99th=[ 464] 00:22:49.314 bw ( KiB/s): min= 128, max= 768, per=3.36%, avg=437.89, stdev=186.64, samples=19 00:22:49.314 iops : min= 32, max= 192, avg=109.47, stdev=46.66, samples=19 00:22:49.314 lat (msec) : 20=0.43%, 50=1.22%, 100=33.25%, 250=52.60%, 500=12.50% 00:22:49.314 cpu : usr=33.08%, sys=1.12%, ctx=928, majf=0, minf=9 
00:22:49.314 IO depths : 1=3.2%, 2=7.6%, 4=19.7%, 8=60.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:22:49.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.314 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.314 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.314 latency : target=0, window=0, percentile=100.00%, depth=16 00:22:49.314 00:22:49.314 Run status group 0 (all jobs): 00:22:49.314 READ: bw=12.7MiB/s (13.3MB/s), 460KiB/s-676KiB/s (471kB/s-692kB/s), io=128MiB (134MB), run=10001-10063msec 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 
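For reference when reading the run summary a few lines up: the aggregate READ line is simply the per-file results rolled up, 128 MiB of I/O over roughly 10 s matches the reported 12.7 MiB/s, and the per-file bandwidths are consistent with 4 KiB random reads (for example, 168.35 IOPS x 4 KiB is about 673 KiB/s, the average bandwidth shown for that file, and 115 IOPS gives the 460 KiB/s at the low end of the range).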
20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 bdev_null0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 [2024-07-15 20:39:08.898180] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 bdev_null1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:49.314 { 00:22:49.314 "params": { 00:22:49.314 "name": "Nvme$subsystem", 00:22:49.314 "trtype": "$TEST_TRANSPORT", 00:22:49.314 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:49.314 "adrfam": "ipv4", 00:22:49.314 "trsvcid": "$NVMF_PORT", 00:22:49.314 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:49.314 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:49.314 "hdgst": ${hdgst:-false}, 00:22:49.314 "ddgst": ${ddgst:-false} 00:22:49.314 }, 00:22:49.314 "method": "bdev_nvme_attach_controller" 00:22:49.314 } 00:22:49.314 EOF 00:22:49.314 )") 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:49.314 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:49.315 { 00:22:49.315 "params": { 00:22:49.315 "name": "Nvme$subsystem", 00:22:49.315 "trtype": "$TEST_TRANSPORT", 00:22:49.315 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:49.315 "adrfam": "ipv4", 00:22:49.315 "trsvcid": "$NVMF_PORT", 00:22:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:49.315 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:49.315 "hdgst": ${hdgst:-false}, 00:22:49.315 "ddgst": ${ddgst:-false} 00:22:49.315 }, 00:22:49.315 "method": "bdev_nvme_attach_controller" 00:22:49.315 } 00:22:49.315 EOF 00:22:49.315 )") 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
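The shell trace above sets up the target side for this run: two null bdevs with a 512-byte block size, 16 bytes of metadata and DIF type 1, each exported through its own NVMe-oF subsystem listening on TCP at 10.0.0.2:4420. rpc_cmd in these traces is the test harness's wrapper around SPDK's scripts/rpc.py, so, assuming a running nvmf_tgt that already has a TCP transport (created earlier in the suite with nvmf_create_transport), roughly the same setup could be reproduced by hand with a sketch like:

  # sketch only: assumes nvmf_tgt is already running with a TCP transport created
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # repeat with bdev_null1 / cnode1 / serial 53313233-1 for the second subsystem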
00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:49.315 "params": { 00:22:49.315 "name": "Nvme0", 00:22:49.315 "trtype": "tcp", 00:22:49.315 "traddr": "10.0.0.2", 00:22:49.315 "adrfam": "ipv4", 00:22:49.315 "trsvcid": "4420", 00:22:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:49.315 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:49.315 "hdgst": false, 00:22:49.315 "ddgst": false 00:22:49.315 }, 00:22:49.315 "method": "bdev_nvme_attach_controller" 00:22:49.315 },{ 00:22:49.315 "params": { 00:22:49.315 "name": "Nvme1", 00:22:49.315 "trtype": "tcp", 00:22:49.315 "traddr": "10.0.0.2", 00:22:49.315 "adrfam": "ipv4", 00:22:49.315 "trsvcid": "4420", 00:22:49.315 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:49.315 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:49.315 "hdgst": false, 00:22:49.315 "ddgst": false 00:22:49.315 }, 00:22:49.315 "method": "bdev_nvme_attach_controller" 00:22:49.315 }' 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:49.315 20:39:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:49.315 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:49.315 ... 00:22:49.315 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:22:49.315 ... 
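fio takes its bdev configuration from /dev/fd/62 (the JSON printed just above) and its job description from /dev/fd/61, which gen_fio_conf writes but the log does not echo. From the banner above and the parameters set in the trace (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, two files, hence the 4 threads that start below), the generated job file is roughly equivalent to the following sketch; the filename values and thread=1 are assumptions, based on the conventional Nvme0n1/Nvme1n1 bdev names produced by bdev_nvme_attach_controller for controllers Nvme0/Nvme1 and on the SPDK fio plugins running jobs as threads:

  ; sketch of the generated job file, assumed values are marked
  [global]
  ioengine=spdk_bdev
  thread=1            ; assumed
  rw=randread
  bs=8k,16k,128k
  iodepth=8
  numjobs=2
  runtime=5

  [filename0]
  filename=Nvme0n1    ; assumed bdev name for controller Nvme0

  [filename1]
  filename=Nvme1n1    ; assumed bdev name for controller Nvme1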
00:22:49.315 fio-3.35 00:22:49.315 Starting 4 threads 00:22:53.495 00:22:53.495 filename0: (groupid=0, jobs=1): err= 0: pid=98268: Mon Jul 15 20:39:14 2024 00:22:53.495 read: IOPS=1672, BW=13.1MiB/s (13.7MB/s)(65.4MiB/5004msec) 00:22:53.495 slat (nsec): min=4747, max=52935, avg=10751.37, stdev=4399.09 00:22:53.495 clat (usec): min=3035, max=7855, avg=4728.51, stdev=679.12 00:22:53.495 lat (usec): min=3047, max=7863, avg=4739.26, stdev=679.25 00:22:53.495 clat percentiles (usec): 00:22:53.495 | 1.00th=[ 3621], 5.00th=[ 4146], 10.00th=[ 4178], 20.00th=[ 4228], 00:22:53.495 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4686], 00:22:53.495 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 5800], 95.00th=[ 5997], 00:22:53.495 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[ 7701], 00:22:53.495 | 99.99th=[ 7832] 00:22:53.495 bw ( KiB/s): min=11136, max=14848, per=25.01%, avg=13378.70, stdev=1209.50, samples=10 00:22:53.495 iops : min= 1392, max= 1856, avg=1672.30, stdev=151.18, samples=10 00:22:53.495 lat (msec) : 4=1.28%, 10=98.72% 00:22:53.495 cpu : usr=92.24%, sys=6.10%, ctx=5, majf=0, minf=0 00:22:53.495 IO depths : 1=9.1%, 2=25.0%, 4=50.0%, 8=15.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:53.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.495 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.495 issued rwts: total=8368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.495 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:53.495 filename0: (groupid=0, jobs=1): err= 0: pid=98269: Mon Jul 15 20:39:14 2024 00:22:53.495 read: IOPS=1670, BW=13.1MiB/s (13.7MB/s)(65.3MiB/5004msec) 00:22:53.495 slat (nsec): min=4899, max=64760, avg=11057.41, stdev=5021.33 00:22:53.495 clat (usec): min=2208, max=9159, avg=4733.25, stdev=750.17 00:22:53.495 lat (usec): min=2221, max=9167, avg=4744.30, stdev=749.68 00:22:53.495 clat percentiles (usec): 00:22:53.495 | 1.00th=[ 3064], 5.00th=[ 4146], 10.00th=[ 4178], 20.00th=[ 4228], 00:22:53.495 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4752], 00:22:53.495 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 5866], 95.00th=[ 5997], 00:22:53.495 | 99.00th=[ 7046], 99.50th=[ 7963], 99.90th=[ 8848], 99.95th=[ 8979], 00:22:53.495 | 99.99th=[ 9110] 00:22:53.495 bw ( KiB/s): min=11152, max=14864, per=24.99%, avg=13367.11, stdev=1278.53, samples=9 00:22:53.495 iops : min= 1394, max= 1858, avg=1670.89, stdev=159.82, samples=9 00:22:53.495 lat (msec) : 4=1.48%, 10=98.52% 00:22:53.495 cpu : usr=92.62%, sys=5.64%, ctx=198, majf=0, minf=0 00:22:53.495 IO depths : 1=7.8%, 2=25.0%, 4=50.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:53.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.495 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.495 issued rwts: total=8360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.495 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:53.495 filename1: (groupid=0, jobs=1): err= 0: pid=98270: Mon Jul 15 20:39:14 2024 00:22:53.495 read: IOPS=1672, BW=13.1MiB/s (13.7MB/s)(65.4MiB/5003msec) 00:22:53.495 slat (nsec): min=4648, max=51520, avg=16061.27, stdev=6411.05 00:22:53.495 clat (usec): min=2967, max=7650, avg=4696.95, stdev=653.89 00:22:53.495 lat (usec): min=2971, max=7666, avg=4713.02, stdev=654.05 00:22:53.495 clat percentiles (usec): 00:22:53.495 | 1.00th=[ 4015], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4178], 00:22:53.495 | 30.00th=[ 4228], 40.00th=[ 
4228], 50.00th=[ 4359], 60.00th=[ 4686], 00:22:53.495 | 70.00th=[ 5014], 80.00th=[ 5276], 90.00th=[ 5800], 95.00th=[ 5997], 00:22:53.495 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 7308], 99.95th=[ 7439], 00:22:53.495 | 99.99th=[ 7635] 00:22:53.495 bw ( KiB/s): min=11136, max=14848, per=25.02%, avg=13383.11, stdev=1289.74, samples=9 00:22:53.495 iops : min= 1392, max= 1856, avg=1672.89, stdev=161.22, samples=9 00:22:53.495 lat (msec) : 4=0.79%, 10=99.21% 00:22:53.495 cpu : usr=91.76%, sys=6.12%, ctx=5, majf=0, minf=0 00:22:53.495 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:53.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.495 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.495 issued rwts: total=8368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.495 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:53.495 filename1: (groupid=0, jobs=1): err= 0: pid=98271: Mon Jul 15 20:39:14 2024 00:22:53.495 read: IOPS=1670, BW=13.1MiB/s (13.7MB/s)(65.3MiB/5003msec) 00:22:53.495 slat (nsec): min=3886, max=86835, avg=13814.33, stdev=4524.91 00:22:53.495 clat (usec): min=2652, max=10656, avg=4722.39, stdev=713.98 00:22:53.495 lat (usec): min=2667, max=10669, avg=4736.20, stdev=713.84 00:22:53.495 clat percentiles (usec): 00:22:53.495 | 1.00th=[ 3654], 5.00th=[ 4113], 10.00th=[ 4146], 20.00th=[ 4178], 00:22:53.495 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4686], 00:22:53.495 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 5866], 95.00th=[ 6063], 00:22:53.495 | 99.00th=[ 6652], 99.50th=[ 7373], 99.90th=[ 8455], 99.95th=[ 8848], 00:22:53.495 | 99.99th=[10683] 00:22:53.495 bw ( KiB/s): min=11136, max=14976, per=25.01%, avg=13378.70, stdev=1214.47, samples=10 00:22:53.495 iops : min= 1392, max= 1872, avg=1672.30, stdev=151.80, samples=10 00:22:53.495 lat (msec) : 4=1.76%, 10=98.23%, 20=0.01% 00:22:53.495 cpu : usr=92.62%, sys=5.88%, ctx=10, majf=0, minf=9 00:22:53.495 IO depths : 1=7.2%, 2=25.0%, 4=50.0%, 8=17.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:53.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.496 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:53.496 issued rwts: total=8360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:53.496 latency : target=0, window=0, percentile=100.00%, depth=8 00:22:53.496 00:22:53.496 Run status group 0 (all jobs): 00:22:53.496 READ: bw=52.2MiB/s (54.8MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=261MiB (274MB), run=5003-5004msec 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.496 00:22:53.496 real 0m23.321s 00:22:53.496 user 2m3.665s 00:22:53.496 sys 0m6.427s 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:53.496 ************************************ 00:22:53.496 END TEST fio_dif_rand_params 00:22:53.496 ************************************ 00:22:53.496 20:39:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 20:39:14 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:22:53.496 20:39:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:22:53.496 20:39:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:53.496 20:39:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 ************************************ 00:22:53.496 START TEST fio_dif_digest 00:22:53.496 ************************************ 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 
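The fio_dif_digest case starting here differs from the previous runs in two ways: NULL_DIF=3 means the null bdev created just below is set up with --dif-type 3, so each 512-byte block carries end-to-end protection information in its 16-byte metadata, and hdgst=true/ddgst=true flow into the bdev_nvme_attach_controller parameters further down, enabling NVMe/TCP header and data digests on the initiator connection.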
00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 bdev_null0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.496 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:53.754 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.754 20:39:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:53.754 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.754 20:39:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:22:53.754 [2024-07-15 20:39:15.000177] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:53.754 { 00:22:53.754 "params": { 00:22:53.754 "name": "Nvme$subsystem", 00:22:53.754 "trtype": "$TEST_TRANSPORT", 
00:22:53.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.754 "adrfam": "ipv4", 00:22:53.754 "trsvcid": "$NVMF_PORT", 00:22:53.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.754 "hdgst": ${hdgst:-false}, 00:22:53.754 "ddgst": ${ddgst:-false} 00:22:53.754 }, 00:22:53.754 "method": "bdev_nvme_attach_controller" 00:22:53.754 } 00:22:53.754 EOF 00:22:53.754 )") 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:53.754 "params": { 00:22:53.754 "name": "Nvme0", 00:22:53.754 "trtype": "tcp", 00:22:53.754 "traddr": "10.0.0.2", 00:22:53.754 "adrfam": "ipv4", 00:22:53.754 "trsvcid": "4420", 00:22:53.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:53.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:53.754 "hdgst": true, 00:22:53.754 "ddgst": true 00:22:53.754 }, 00:22:53.754 "method": "bdev_nvme_attach_controller" 00:22:53.754 }' 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:53.754 20:39:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:22:53.754 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:22:53.754 ... 
00:22:53.754 fio-3.35 00:22:53.754 Starting 3 threads 00:23:05.951 00:23:05.951 filename0: (groupid=0, jobs=1): err= 0: pid=98377: Mon Jul 15 20:39:25 2024 00:23:05.951 read: IOPS=157, BW=19.6MiB/s (20.6MB/s)(197MiB/10007msec) 00:23:05.951 slat (nsec): min=5504, max=66184, avg=18497.06, stdev=6582.53 00:23:05.951 clat (usec): min=8894, max=35441, avg=19060.46, stdev=2770.73 00:23:05.951 lat (usec): min=8902, max=35483, avg=19078.95, stdev=2771.73 00:23:05.951 clat percentiles (usec): 00:23:05.951 | 1.00th=[12125], 5.00th=[16057], 10.00th=[16581], 20.00th=[17433], 00:23:05.951 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18482], 60.00th=[19006], 00:23:05.951 | 70.00th=[19530], 80.00th=[20317], 90.00th=[22676], 95.00th=[24249], 00:23:05.951 | 99.00th=[28967], 99.50th=[31589], 99.90th=[35390], 99.95th=[35390], 00:23:05.951 | 99.99th=[35390] 00:23:05.951 bw ( KiB/s): min=16384, max=22528, per=28.09%, avg=20116.21, stdev=1639.90, samples=19 00:23:05.951 iops : min= 128, max= 176, avg=157.16, stdev=12.81, samples=19 00:23:05.951 lat (msec) : 10=0.19%, 20=76.67%, 50=23.14% 00:23:05.951 cpu : usr=92.12%, sys=6.37%, ctx=10, majf=0, minf=9 00:23:05.951 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.951 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:05.951 filename0: (groupid=0, jobs=1): err= 0: pid=98378: Mon Jul 15 20:39:25 2024 00:23:05.951 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(238MiB/10046msec) 00:23:05.951 slat (usec): min=8, max=106, avg=17.17, stdev= 6.77 00:23:05.951 clat (usec): min=7938, max=52003, avg=15806.86, stdev=2666.93 00:23:05.951 lat (usec): min=7958, max=52018, avg=15824.03, stdev=2667.83 00:23:05.951 clat percentiles (usec): 00:23:05.951 | 1.00th=[10028], 5.00th=[13042], 10.00th=[13435], 20.00th=[14091], 00:23:05.951 | 30.00th=[14484], 40.00th=[14877], 50.00th=[15270], 60.00th=[15664], 00:23:05.951 | 70.00th=[16319], 80.00th=[17433], 90.00th=[19006], 95.00th=[20317], 00:23:05.951 | 99.00th=[24511], 99.50th=[25822], 99.90th=[50594], 99.95th=[52167], 00:23:05.951 | 99.99th=[52167] 00:23:05.951 bw ( KiB/s): min=18432, max=27392, per=33.95%, avg=24307.20, stdev=2370.37, samples=20 00:23:05.951 iops : min= 144, max= 214, avg=189.90, stdev=18.52, samples=20 00:23:05.951 lat (msec) : 10=1.05%, 20=92.90%, 50=5.94%, 100=0.11% 00:23:05.951 cpu : usr=92.63%, sys=5.68%, ctx=61, majf=0, minf=9 00:23:05.951 IO depths : 1=2.8%, 2=97.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.951 issued rwts: total=1901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:05.951 filename0: (groupid=0, jobs=1): err= 0: pid=98379: Mon Jul 15 20:39:25 2024 00:23:05.951 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(268MiB/10042msec) 00:23:05.951 slat (nsec): min=3911, max=82051, avg=17819.25, stdev=6157.71 00:23:05.951 clat (usec): min=9456, max=56245, avg=13996.54, stdev=3317.04 00:23:05.951 lat (usec): min=9469, max=56261, avg=14014.36, stdev=3317.86 00:23:05.951 clat percentiles (usec): 00:23:05.951 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11863], 20.00th=[12387], 00:23:05.951 | 
30.00th=[12649], 40.00th=[13042], 50.00th=[13173], 60.00th=[13566], 00:23:05.951 | 70.00th=[13960], 80.00th=[15008], 90.00th=[17171], 95.00th=[18482], 00:23:05.951 | 99.00th=[23987], 99.50th=[25297], 99.90th=[54789], 99.95th=[54789], 00:23:05.951 | 99.99th=[56361] 00:23:05.951 bw ( KiB/s): min=21504, max=29952, per=38.32%, avg=27443.20, stdev=2517.34, samples=20 00:23:05.951 iops : min= 168, max= 234, avg=214.40, stdev=19.67, samples=20 00:23:05.951 lat (msec) : 10=0.09%, 20=97.86%, 50=1.68%, 100=0.37% 00:23:05.951 cpu : usr=91.87%, sys=6.40%, ctx=11, majf=0, minf=0 00:23:05.951 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:05.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.951 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:05.951 issued rwts: total=2146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:05.951 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:05.951 00:23:05.951 Run status group 0 (all jobs): 00:23:05.951 READ: bw=69.9MiB/s (73.3MB/s), 19.6MiB/s-26.7MiB/s (20.6MB/s-28.0MB/s), io=703MiB (737MB), run=10007-10046msec 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.951 00:23:05.951 real 0m10.918s 00:23:05.951 user 0m28.347s 00:23:05.951 sys 0m2.082s 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.951 ************************************ 00:23:05.951 END TEST fio_dif_digest 00:23:05.951 ************************************ 00:23:05.951 20:39:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:23:05.951 20:39:25 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:23:05.952 20:39:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:23:05.952 20:39:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:23:05.952 20:39:25 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:05.952 20:39:25 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:23:05.952 20:39:25 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:05.952 20:39:25 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:23:05.952 20:39:25 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:05.952 20:39:25 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:05.952 rmmod nvme_tcp 00:23:05.952 rmmod nvme_fabrics 
00:23:05.952 rmmod nvme_keyring 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97650 ']' 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97650 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 97650 ']' 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 97650 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 97650 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:05.952 killing process with pid 97650 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 97650' 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@967 -- # kill 97650 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@972 -- # wait 97650 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:05.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:05.952 Waiting for block devices as requested 00:23:05.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:05.952 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.952 20:39:26 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:05.952 00:23:05.952 real 0m58.446s 00:23:05.952 user 3m47.204s 00:23:05.952 sys 0m16.125s 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:05.952 20:39:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:05.952 ************************************ 00:23:05.952 END TEST nvmf_dif 00:23:05.952 ************************************ 00:23:05.952 20:39:26 -- common/autotest_common.sh@1142 -- # return 0 00:23:05.952 20:39:26 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:05.952 20:39:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:05.952 20:39:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:05.952 20:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:05.952 ************************************ 00:23:05.952 START TEST nvmf_abort_qd_sizes 00:23:05.952 ************************************ 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:23:05.952 * Looking for test storage... 00:23:05.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:05.952 20:39:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:05.952 Cannot find device "nvmf_tgt_br" 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.952 Cannot find device "nvmf_tgt_br2" 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:05.952 Cannot find device "nvmf_tgt_br" 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:05.952 Cannot find device "nvmf_tgt_br2" 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:23:05.952 20:39:26 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.953 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.953 20:39:27 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:23:05.953 00:23:05.953 --- 10.0.0.2 ping statistics --- 00:23:05.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.953 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:23:05.953 00:23:05.953 --- 10.0.0.3 ping statistics --- 00:23:05.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.953 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:05.953 00:23:05.953 --- 10.0.0.1 ping statistics --- 00:23:05.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.953 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:05.953 20:39:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:06.520 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:06.520 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:06.520 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98960 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98960 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 98960 ']' 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.778 20:39:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:06.778 [2024-07-15 20:39:28.140480] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
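The ip/iptables block above is what gives these tests their network: one veth pair leaves nvmf_init_if (10.0.0.1/24) on the host side, a second pair puts nvmf_tgt_if (10.0.0.2/24) inside the nvmf_tgt_ns_spdk namespace, the peer legs are joined by the nvmf_br bridge, TCP port 4420 is opened, and the single-packet pings confirm reachability before nvmf_tgt is started inside the namespace. A condensed sketch drawn from the commands in the trace (the second target leg, nvmf_tgt_if2 on 10.0.0.3, is created the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target reachability check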
00:23:06.778 [2024-07-15 20:39:28.141146] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.778 [2024-07-15 20:39:28.274730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.037 [2024-07-15 20:39:28.369149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.037 [2024-07-15 20:39:28.369573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.037 [2024-07-15 20:39:28.369721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.037 [2024-07-15 20:39:28.369753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.037 [2024-07-15 20:39:28.369767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.037 [2024-07-15 20:39:28.369923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.037 [2024-07-15 20:39:28.370508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.037 [2024-07-15 20:39:28.370615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.037 [2024-07-15 20:39:28.370746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:23:07.970 20:39:29 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 ************************************ 00:23:07.970 START TEST spdk_target_abort 00:23:07.970 ************************************ 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 spdk_targetn1 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 [2024-07-15 20:39:29.272022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.970 [2024-07-15 20:39:29.300559] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.970 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.970 20:39:29 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:07.971 20:39:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:11.250 Initializing NVMe Controllers 00:23:11.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:11.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:11.250 Initialization complete. Launching workers. 
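rabort sweeps the bundled abort example over queue depths 4, 24 and 64 against the listener configured above; each pass prints how many I/Os completed and how many abort commands were submitted and accepted. One pass, as launched in the trace (-q sets the queue depth under test, -w rw with -M 50 asks for an even read/write mix, -o 4096 for 4 KiB I/Os):

  ./build/examples/abort -q 4 -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'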
00:23:11.250 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11530, failed: 0 00:23:11.250 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1061, failed to submit 10469 00:23:11.250 success 786, unsuccess 275, failed 0 00:23:11.250 20:39:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:11.250 20:39:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:14.527 Initializing NVMe Controllers 00:23:14.527 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:14.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:14.527 Initialization complete. Launching workers. 00:23:14.527 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5883, failed: 0 00:23:14.527 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1206, failed to submit 4677 00:23:14.527 success 270, unsuccess 936, failed 0 00:23:14.527 20:39:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:14.527 20:39:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:17.807 Initializing NVMe Controllers 00:23:17.807 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:23:17.807 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:17.807 Initialization complete. Launching workers. 
00:23:17.807 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29545, failed: 0 00:23:17.807 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2648, failed to submit 26897 00:23:17.807 success 395, unsuccess 2253, failed 0 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.807 20:39:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98960 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 98960 ']' 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 98960 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 98960 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:18.738 killing process with pid 98960 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 98960' 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 98960 00:23:18.738 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 98960 00:23:18.996 00:23:18.996 real 0m11.143s 00:23:18.996 user 0m44.349s 00:23:18.996 sys 0m1.775s 00:23:18.996 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:18.996 20:39:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:18.996 ************************************ 00:23:18.996 END TEST spdk_target_abort 00:23:18.996 ************************************ 00:23:18.996 20:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:18.996 20:39:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:23:18.996 20:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:18.996 20:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.997 20:39:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:18.997 
************************************ 00:23:18.997 START TEST kernel_target_abort 00:23:18.997 ************************************ 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:18.997 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:19.305 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:19.305 Waiting for block devices as requested 00:23:19.305 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:19.564 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:19.564 No valid GPT data, bailing 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:23:19.564 20:39:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:19.564 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:19.564 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:19.564 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:19.564 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:19.564 No valid GPT data, bailing 00:23:19.564 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
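kernel_target_abort needs a raw namespace to back the in-kernel target, so this scan walks /sys/block/nvme*, skips zoned devices, and uses spdk-gpt.py/blkid to reject anything that already carries a partition table; as the rest of the scan shows, the last namespace that passes (nvme1n1) ends up as the backing device. A condensed approximation of that selection logic:

  nvme=
  for block in /sys/block/nvme*; do
      dev=/dev/${block##*/}
      zoned=$(cat "$block/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned == none ]] || continue                          # skip zoned namespaces
      [[ -z $(blkid -s PTTYPE -o value "$dev") ]] || continue   # skip partitioned devices
      nvme=$dev                                                 # last clean namespace wins
  done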
00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:19.823 No valid GPT data, bailing 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:19.823 No valid GPT data, bailing 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 --hostid=ec49175a-6012-419b-81e2-f6fecd100da5 -a 10.0.0.1 -t tcp -s 4420 00:23:19.823 00:23:19.823 Discovery Log Number of Records 2, Generation counter 2 00:23:19.823 =====Discovery Log Entry 0====== 00:23:19.823 trtype: tcp 00:23:19.823 adrfam: ipv4 00:23:19.823 subtype: current discovery subsystem 00:23:19.823 treq: not specified, sq flow control disable supported 00:23:19.823 portid: 1 00:23:19.823 trsvcid: 4420 00:23:19.823 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:19.823 traddr: 10.0.0.1 00:23:19.823 eflags: none 00:23:19.823 sectype: none 00:23:19.823 =====Discovery Log Entry 1====== 00:23:19.823 trtype: tcp 00:23:19.823 adrfam: ipv4 00:23:19.823 subtype: nvme subsystem 00:23:19.823 treq: not specified, sq flow control disable supported 00:23:19.823 portid: 1 00:23:19.823 trsvcid: 4420 00:23:19.823 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:19.823 traddr: 10.0.0.1 00:23:19.823 eflags: none 00:23:19.823 sectype: none 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:23:19.823 20:39:41 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:19.823 20:39:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:23.106 Initializing NVMe Controllers 00:23:23.106 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:23.106 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:23.106 Initialization complete. Launching workers. 00:23:23.106 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32312, failed: 0 00:23:23.106 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32312, failed to submit 0 00:23:23.106 success 0, unsuccess 32312, failed 0 00:23:23.106 20:39:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:23.106 20:39:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:26.389 Initializing NVMe Controllers 00:23:26.389 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:26.389 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:26.389 Initialization complete. Launching workers. 
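The abort sweep is now running against the kernel nvmet target that configure_kernel_target built over configfs and that nvme discover confirmed on 10.0.0.1:4420. A condensed sketch of that setup; the xtrace lines only show the echoed values, so the redirection targets below assume the mainline nvmet configfs attribute names:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1            > "$subsys/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"    # linking the subsystem into the port starts serving it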
00:23:26.389 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64441, failed: 0 00:23:26.389 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27434, failed to submit 37007 00:23:26.389 success 0, unsuccess 27434, failed 0 00:23:26.389 20:39:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:23:26.389 20:39:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:29.662 Initializing NVMe Controllers 00:23:29.662 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:29.662 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:23:29.662 Initialization complete. Launching workers. 00:23:29.662 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72191, failed: 0 00:23:29.662 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18036, failed to submit 54155 00:23:29.662 success 0, unsuccess 18036, failed 0 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:29.662 20:39:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:30.227 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:32.125 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:32.125 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:32.125 00:23:32.125 real 0m12.918s 00:23:32.125 user 0m6.231s 00:23:32.125 sys 0m4.008s 00:23:32.125 20:39:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.125 20:39:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:23:32.125 ************************************ 00:23:32.125 END TEST kernel_target_abort 00:23:32.125 ************************************ 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:32.125 
20:39:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.125 rmmod nvme_tcp 00:23:32.125 rmmod nvme_fabrics 00:23:32.125 rmmod nvme_keyring 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98960 ']' 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98960 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 98960 ']' 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 98960 00:23:32.125 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (98960) - No such process 00:23:32.125 Process with pid 98960 is not found 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 98960 is not found' 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:23:32.125 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:32.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:32.382 Waiting for block devices as requested 00:23:32.382 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.382 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:32.641 00:23:32.641 real 0m27.143s 00:23:32.641 user 0m51.661s 00:23:32.641 sys 0m7.036s 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:32.641 20:39:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:23:32.641 ************************************ 00:23:32.641 END TEST nvmf_abort_qd_sizes 00:23:32.641 ************************************ 00:23:32.641 20:39:53 -- common/autotest_common.sh@1142 -- # return 0 00:23:32.641 20:39:53 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:32.641 20:39:53 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:23:32.641 20:39:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:32.641 20:39:53 -- common/autotest_common.sh@10 -- # set +x 00:23:32.641 ************************************ 00:23:32.641 START TEST keyring_file 00:23:32.641 ************************************ 00:23:32.641 20:39:53 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:23:32.641 * Looking for test storage... 00:23:32.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:32.641 20:39:54 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.641 20:39:54 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.641 20:39:54 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.641 20:39:54 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.641 20:39:54 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.641 20:39:54 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.641 20:39:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:23:32.641 20:39:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@47 -- # : 0 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:23:32.641 20:39:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jxZa09aX2Q 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:32.641 20:39:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:32.641 20:39:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jxZa09aX2Q 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jxZa09aX2Q 00:23:32.899 20:39:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jxZa09aX2Q 00:23:32.899 20:39:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1N9sdP5hvZ 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:32.899 20:39:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:32.899 20:39:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:32.899 20:39:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:32.899 20:39:54 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:32.899 20:39:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:32.899 20:39:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1N9sdP5hvZ 00:23:32.899 20:39:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1N9sdP5hvZ 00:23:32.899 20:39:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1N9sdP5hvZ 00:23:32.899 20:39:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=99837 00:23:32.899 20:39:54 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:32.899 20:39:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99837 00:23:32.899 20:39:54 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99837 ']' 00:23:32.899 20:39:54 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.899 20:39:54 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:32.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.899 20:39:54 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.899 20:39:54 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:32.899 20:39:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:32.899 [2024-07-15 20:39:54.263319] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
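The prep_key calls traced above stage each TLS PSK as a file before any RPC touches it: a temp file from mktemp, the hex key rewritten into the NVMeTLSkey-1 interchange form by an inline "python -" step whose body is not visible in the trace, and permissions tightened to 0600. A hedged sketch of that flow, assuming keyring/common.sh and nvmf/common.sh are sourced so format_interchange_psk is available, and treating it as a black box that prints the interchange string:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                               # e.g. /tmp/tmp.jxZa09aX2Q in this run
    format_interchange_psk "$key" 0 > "$path"    # digest 0, per the trace; redirection is assumed
    chmod 0600 "$path"                           # keyring_file_add_key later rejects a 0660 file
    echo "$path"                                 # the caller records this as key0path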
00:23:32.899 [2024-07-15 20:39:54.263439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99837 ] 00:23:33.156 [2024-07-15 20:39:54.403370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.156 [2024-07-15 20:39:54.489146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.414 20:39:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:33.414 20:39:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:33.414 20:39:54 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:23:33.414 20:39:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.414 20:39:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:33.414 [2024-07-15 20:39:54.673202] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.414 null0 00:23:33.414 [2024-07-15 20:39:54.705172] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.414 [2024-07-15 20:39:54.705408] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:33.414 [2024-07-15 20:39:54.713168] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:33.414 20:39:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.414 20:39:54 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:33.415 [2024-07-15 20:39:54.725181] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:23:33.415 2024/07/15 20:39:54 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:23:33.415 request: 00:23:33.415 { 00:23:33.415 "method": "nvmf_subsystem_add_listener", 00:23:33.415 "params": { 00:23:33.415 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:23:33.415 "secure_channel": false, 00:23:33.415 "listen_address": { 00:23:33.415 "trtype": "tcp", 00:23:33.415 "traddr": "127.0.0.1", 00:23:33.415 "trsvcid": "4420" 00:23:33.415 } 00:23:33.415 } 00:23:33.415 } 00:23:33.415 Got JSON-RPC error 
response 00:23:33.415 GoRPCClient: error on JSON-RPC call 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:33.415 20:39:54 keyring_file -- keyring/file.sh@46 -- # bperfpid=99858 00:23:33.415 20:39:54 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:23:33.415 20:39:54 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99858 /var/tmp/bperf.sock 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 99858 ']' 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.415 20:39:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:33.415 [2024-07-15 20:39:54.780168] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 00:23:33.415 [2024-07-15 20:39:54.780263] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99858 ] 00:23:33.415 [2024-07-15 20:39:54.911467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.673 [2024-07-15 20:39:55.000982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.635 20:39:55 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.635 20:39:55 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:34.635 20:39:55 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:34.635 20:39:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:34.635 20:39:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1N9sdP5hvZ 00:23:34.635 20:39:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1N9sdP5hvZ 00:23:34.893 20:39:56 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:23:34.893 20:39:56 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:23:34.893 20:39:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:34.893 20:39:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:34.893 20:39:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:35.151 20:39:56 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.jxZa09aX2Q == 
\/\t\m\p\/\t\m\p\.\j\x\Z\a\0\9\a\X\2\Q ]] 00:23:35.151 20:39:56 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:23:35.151 20:39:56 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:23:35.151 20:39:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:35.151 20:39:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:35.151 20:39:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:35.412 20:39:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.1N9sdP5hvZ == \/\t\m\p\/\t\m\p\.\1\N\9\s\d\P\5\h\v\Z ]] 00:23:35.412 20:39:56 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:23:35.412 20:39:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:35.412 20:39:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:35.412 20:39:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:35.412 20:39:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:35.412 20:39:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:35.670 20:39:57 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:23:35.670 20:39:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:23:35.670 20:39:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:35.670 20:39:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:35.670 20:39:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:35.670 20:39:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:35.670 20:39:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:36.233 20:39:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:23:36.233 20:39:57 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:36.233 20:39:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:36.490 [2024-07-15 20:39:57.790836] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.490 nvme0n1 00:23:36.490 20:39:57 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:23:36.490 20:39:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:36.490 20:39:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.490 20:39:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:36.490 20:39:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.490 20:39:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:36.748 20:39:58 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:23:36.748 20:39:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:23:36.748 20:39:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:36.748 20:39:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:36.748 20:39:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:23:36.748 20:39:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:36.748 20:39:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:37.335 20:39:58 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:23:37.335 20:39:58 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:37.335 Running I/O for 1 seconds... 00:23:38.266 00:23:38.266 Latency(us) 00:23:38.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:38.266 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:23:38.266 nvme0n1 : 1.01 10745.42 41.97 0.00 0.00 11870.99 6076.97 23831.27 00:23:38.266 =================================================================================================================== 00:23:38.266 Total : 10745.42 41.97 0.00 0.00 11870.99 6076.97 23831.27 00:23:38.266 0 00:23:38.266 20:39:59 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:38.266 20:39:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:38.524 20:40:00 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:23:38.524 20:40:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:38.524 20:40:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:38.524 20:40:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:38.524 20:40:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:38.524 20:40:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:39.088 20:40:00 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:23:39.088 20:40:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:23:39.088 20:40:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:39.088 20:40:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:39.088 20:40:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:39.088 20:40:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:39.088 20:40:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:39.383 20:40:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:23:39.383 20:40:00 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:39.383 20:40:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:39.384 20:40:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:39.384 20:40:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:39.384 20:40:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:39.384 20:40:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:39.384 20:40:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
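The get_refcnt checks traced above all follow one pattern: ask the bdevperf RPC socket for every registered key and pull a single key's reference count out with jq. A minimal sketch of that pattern, using only the RPCs visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Reference count of key0 as seen by the bdevperf instance on /var/tmp/bperf.sock.
    refcnt=$("$rpc" -s /var/tmp/bperf.sock keyring_get_keys \
                 | jq -r '.[] | select(.name == "key0") | .refcnt')
    echo "$refcnt"   # 2 while a controller holds the key, 1 once it has been detached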
00:23:39.384 20:40:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:39.384 20:40:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:23:39.681 [2024-07-15 20:40:00.862135] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:39.681 [2024-07-15 20:40:00.862569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1df30 (107): Transport endpoint is not connected 00:23:39.681 [2024-07-15 20:40:00.863554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd1df30 (9): Bad file descriptor 00:23:39.681 [2024-07-15 20:40:00.864548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:39.681 [2024-07-15 20:40:00.864577] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:39.681 [2024-07-15 20:40:00.864588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:39.681 2024/07/15 20:40:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:39.681 request: 00:23:39.681 { 00:23:39.681 "method": "bdev_nvme_attach_controller", 00:23:39.681 "params": { 00:23:39.681 "name": "nvme0", 00:23:39.681 "trtype": "tcp", 00:23:39.681 "traddr": "127.0.0.1", 00:23:39.681 "adrfam": "ipv4", 00:23:39.681 "trsvcid": "4420", 00:23:39.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:39.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:39.681 "prchk_reftag": false, 00:23:39.681 "prchk_guard": false, 00:23:39.681 "hdgst": false, 00:23:39.681 "ddgst": false, 00:23:39.681 "psk": "key1" 00:23:39.681 } 00:23:39.681 } 00:23:39.681 Got JSON-RPC error response 00:23:39.681 GoRPCClient: error on JSON-RPC call 00:23:39.681 20:40:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:39.681 20:40:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:39.681 20:40:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:39.681 20:40:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:39.681 20:40:00 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:23:39.681 20:40:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:39.681 20:40:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:39.682 20:40:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:39.682 20:40:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:39.682 20:40:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:39.940 20:40:01 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:23:39.940 
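The failed attach traced above is deliberate: the controller is re-attached with key1 while the target side was provisioned for the other key, so the connection drops and the RPC must return an error for the test to pass. A hedged sketch of that negative check; the reasoning about why key1 is rejected is inferred from the trace, not stated in it.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Attaching with the wrong PSK must fail; a successful attach would fail the test.
    if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
            -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
            -q nqn.2016-06.io.spdk:host0 --psk key1; then
        echo "attach with key1 unexpectedly succeeded" >&2
        exit 1
    fi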
20:40:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:23:39.940 20:40:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:39.940 20:40:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:39.940 20:40:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:39.940 20:40:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:39.940 20:40:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.198 20:40:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:23:40.198 20:40:01 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:23:40.198 20:40:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:40.455 20:40:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:23:40.455 20:40:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:23:40.713 20:40:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:23:40.713 20:40:02 keyring_file -- keyring/file.sh@77 -- # jq length 00:23:40.713 20:40:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:40.971 20:40:02 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:23:40.971 20:40:02 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.jxZa09aX2Q 00:23:40.971 20:40:02 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:40.971 20:40:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:40.971 20:40:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:41.229 [2024-07-15 20:40:02.701975] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jxZa09aX2Q': 0100660 00:23:41.229 [2024-07-15 20:40:02.702030] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:41.229 2024/07/15 20:40:02 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.jxZa09aX2Q], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:23:41.229 request: 00:23:41.229 { 00:23:41.229 "method": "keyring_file_add_key", 00:23:41.229 "params": { 00:23:41.229 "name": "key0", 00:23:41.229 "path": "/tmp/tmp.jxZa09aX2Q" 00:23:41.229 } 00:23:41.229 } 00:23:41.229 Got JSON-RPC error response 00:23:41.229 GoRPCClient: error on JSON-RPC call 00:23:41.229 20:40:02 keyring_file -- common/autotest_common.sh@651 -- # 
es=1 00:23:41.229 20:40:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:41.229 20:40:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:41.229 20:40:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:41.229 20:40:02 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.jxZa09aX2Q 00:23:41.229 20:40:02 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:41.229 20:40:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jxZa09aX2Q 00:23:41.796 20:40:03 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.jxZa09aX2Q 00:23:41.796 20:40:03 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:23:41.796 20:40:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:41.796 20:40:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:41.796 20:40:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:41.796 20:40:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:41.796 20:40:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:42.054 20:40:03 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:23:42.054 20:40:03 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:42.054 20:40:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:42.054 20:40:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:42.322 [2024-07-15 20:40:03.754191] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jxZa09aX2Q': No such file or directory 00:23:42.322 [2024-07-15 20:40:03.754240] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:23:42.322 [2024-07-15 20:40:03.754267] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:23:42.322 [2024-07-15 20:40:03.754276] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:42.322 [2024-07-15 20:40:03.754285] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:23:42.322 2024/07/15 
20:40:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:23:42.322 request: 00:23:42.322 { 00:23:42.322 "method": "bdev_nvme_attach_controller", 00:23:42.322 "params": { 00:23:42.322 "name": "nvme0", 00:23:42.322 "trtype": "tcp", 00:23:42.322 "traddr": "127.0.0.1", 00:23:42.322 "adrfam": "ipv4", 00:23:42.322 "trsvcid": "4420", 00:23:42.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:42.322 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:42.322 "prchk_reftag": false, 00:23:42.322 "prchk_guard": false, 00:23:42.322 "hdgst": false, 00:23:42.322 "ddgst": false, 00:23:42.322 "psk": "key0" 00:23:42.322 } 00:23:42.322 } 00:23:42.322 Got JSON-RPC error response 00:23:42.322 GoRPCClient: error on JSON-RPC call 00:23:42.322 20:40:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:23:42.322 20:40:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:42.322 20:40:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:42.322 20:40:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:42.322 20:40:03 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:23:42.322 20:40:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:42.581 20:40:04 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1HW2klfFYX 00:23:42.581 20:40:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:42.581 20:40:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:42.581 20:40:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:23:42.581 20:40:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:42.581 20:40:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:42.581 20:40:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:23:42.581 20:40:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:23:42.839 20:40:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1HW2klfFYX 00:23:42.839 20:40:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1HW2klfFYX 00:23:42.839 20:40:04 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.1HW2klfFYX 00:23:42.839 20:40:04 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1HW2klfFYX 00:23:42.839 20:40:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1HW2klfFYX 00:23:43.096 20:40:04 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:43.096 20:40:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:43.354 nvme0n1 00:23:43.354 20:40:04 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:23:43.354 20:40:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:43.354 20:40:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:43.354 20:40:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:43.354 20:40:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:43.354 20:40:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:43.611 20:40:05 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:23:43.611 20:40:05 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:23:43.611 20:40:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:23:44.177 20:40:05 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:23:44.177 20:40:05 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:23:44.177 20:40:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:44.177 20:40:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:44.177 20:40:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.435 20:40:05 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:23:44.435 20:40:05 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:23:44.435 20:40:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:44.435 20:40:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:44.435 20:40:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:44.435 20:40:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:44.435 20:40:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:44.694 20:40:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:23:44.694 20:40:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:44.694 20:40:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:44.952 20:40:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:23:44.952 20:40:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:23:44.952 20:40:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:45.519 20:40:06 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:23:45.519 20:40:06 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1HW2klfFYX 00:23:45.519 20:40:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1HW2klfFYX 
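The remove-while-in-use sequence traced above is the keyring semantics this test is really after: removing a key that a live controller still references only marks it removed, and the entry disappears from keyring_get_keys once the controller is detached and the last reference is gone. A condensed sketch of that sequence, reusing the RPCs and jq filters from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf() { "$rpc" -s /var/tmp/bperf.sock "$@"; }

    bperf keyring_file_remove_key key0                                        # key0 is still referenced by nvme0
    bperf keyring_get_keys | jq -r '.[] | select(.name == "key0") | .removed' # -> true
    bperf keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'  # -> 1
    bperf bdev_nvme_detach_controller nvme0                                   # drop the last reference
    bperf keyring_get_keys | jq length                                        # -> 0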
00:23:45.519 20:40:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1N9sdP5hvZ 00:23:45.519 20:40:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1N9sdP5hvZ 00:23:45.777 20:40:07 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:45.777 20:40:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:23:46.344 nvme0n1 00:23:46.344 20:40:07 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:23:46.344 20:40:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:23:46.603 20:40:07 keyring_file -- keyring/file.sh@112 -- # config='{ 00:23:46.603 "subsystems": [ 00:23:46.603 { 00:23:46.603 "subsystem": "keyring", 00:23:46.603 "config": [ 00:23:46.603 { 00:23:46.603 "method": "keyring_file_add_key", 00:23:46.603 "params": { 00:23:46.603 "name": "key0", 00:23:46.603 "path": "/tmp/tmp.1HW2klfFYX" 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "keyring_file_add_key", 00:23:46.603 "params": { 00:23:46.603 "name": "key1", 00:23:46.603 "path": "/tmp/tmp.1N9sdP5hvZ" 00:23:46.603 } 00:23:46.603 } 00:23:46.603 ] 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "subsystem": "iobuf", 00:23:46.603 "config": [ 00:23:46.603 { 00:23:46.603 "method": "iobuf_set_options", 00:23:46.603 "params": { 00:23:46.603 "large_bufsize": 135168, 00:23:46.603 "large_pool_count": 1024, 00:23:46.603 "small_bufsize": 8192, 00:23:46.603 "small_pool_count": 8192 00:23:46.603 } 00:23:46.603 } 00:23:46.603 ] 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "subsystem": "sock", 00:23:46.603 "config": [ 00:23:46.603 { 00:23:46.603 "method": "sock_set_default_impl", 00:23:46.603 "params": { 00:23:46.603 "impl_name": "posix" 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "sock_impl_set_options", 00:23:46.603 "params": { 00:23:46.603 "enable_ktls": false, 00:23:46.603 "enable_placement_id": 0, 00:23:46.603 "enable_quickack": false, 00:23:46.603 "enable_recv_pipe": true, 00:23:46.603 "enable_zerocopy_send_client": false, 00:23:46.603 "enable_zerocopy_send_server": true, 00:23:46.603 "impl_name": "ssl", 00:23:46.603 "recv_buf_size": 4096, 00:23:46.603 "send_buf_size": 4096, 00:23:46.603 "tls_version": 0, 00:23:46.603 "zerocopy_threshold": 0 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "sock_impl_set_options", 00:23:46.603 "params": { 00:23:46.603 "enable_ktls": false, 00:23:46.603 "enable_placement_id": 0, 00:23:46.603 "enable_quickack": false, 00:23:46.603 "enable_recv_pipe": true, 00:23:46.603 "enable_zerocopy_send_client": false, 00:23:46.603 "enable_zerocopy_send_server": true, 00:23:46.603 "impl_name": "posix", 00:23:46.603 "recv_buf_size": 2097152, 00:23:46.603 "send_buf_size": 2097152, 00:23:46.603 "tls_version": 0, 00:23:46.603 "zerocopy_threshold": 0 00:23:46.603 } 00:23:46.603 } 00:23:46.603 ] 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "subsystem": "vmd", 00:23:46.603 "config": [] 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "subsystem": "accel", 00:23:46.603 "config": [ 00:23:46.603 { 00:23:46.603 "method": 
"accel_set_options", 00:23:46.603 "params": { 00:23:46.603 "buf_count": 2048, 00:23:46.603 "large_cache_size": 16, 00:23:46.603 "sequence_count": 2048, 00:23:46.603 "small_cache_size": 128, 00:23:46.603 "task_count": 2048 00:23:46.603 } 00:23:46.603 } 00:23:46.603 ] 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "subsystem": "bdev", 00:23:46.603 "config": [ 00:23:46.603 { 00:23:46.603 "method": "bdev_set_options", 00:23:46.603 "params": { 00:23:46.603 "bdev_auto_examine": true, 00:23:46.603 "bdev_io_cache_size": 256, 00:23:46.603 "bdev_io_pool_size": 65535, 00:23:46.603 "iobuf_large_cache_size": 16, 00:23:46.603 "iobuf_small_cache_size": 128 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "bdev_raid_set_options", 00:23:46.603 "params": { 00:23:46.603 "process_window_size_kb": 1024 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "bdev_iscsi_set_options", 00:23:46.603 "params": { 00:23:46.603 "timeout_sec": 30 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "bdev_nvme_set_options", 00:23:46.603 "params": { 00:23:46.603 "action_on_timeout": "none", 00:23:46.603 "allow_accel_sequence": false, 00:23:46.603 "arbitration_burst": 0, 00:23:46.603 "bdev_retry_count": 3, 00:23:46.603 "ctrlr_loss_timeout_sec": 0, 00:23:46.603 "delay_cmd_submit": true, 00:23:46.603 "dhchap_dhgroups": [ 00:23:46.603 "null", 00:23:46.603 "ffdhe2048", 00:23:46.603 "ffdhe3072", 00:23:46.603 "ffdhe4096", 00:23:46.603 "ffdhe6144", 00:23:46.603 "ffdhe8192" 00:23:46.603 ], 00:23:46.603 "dhchap_digests": [ 00:23:46.603 "sha256", 00:23:46.603 "sha384", 00:23:46.603 "sha512" 00:23:46.603 ], 00:23:46.603 "disable_auto_failback": false, 00:23:46.603 "fast_io_fail_timeout_sec": 0, 00:23:46.603 "generate_uuids": false, 00:23:46.603 "high_priority_weight": 0, 00:23:46.603 "io_path_stat": false, 00:23:46.603 "io_queue_requests": 512, 00:23:46.603 "keep_alive_timeout_ms": 10000, 00:23:46.603 "low_priority_weight": 0, 00:23:46.603 "medium_priority_weight": 0, 00:23:46.603 "nvme_adminq_poll_period_us": 10000, 00:23:46.603 "nvme_error_stat": false, 00:23:46.603 "nvme_ioq_poll_period_us": 0, 00:23:46.603 "rdma_cm_event_timeout_ms": 0, 00:23:46.603 "rdma_max_cq_size": 0, 00:23:46.603 "rdma_srq_size": 0, 00:23:46.603 "reconnect_delay_sec": 0, 00:23:46.603 "timeout_admin_us": 0, 00:23:46.603 "timeout_us": 0, 00:23:46.603 "transport_ack_timeout": 0, 00:23:46.603 "transport_retry_count": 4, 00:23:46.603 "transport_tos": 0 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "bdev_nvme_attach_controller", 00:23:46.603 "params": { 00:23:46.603 "adrfam": "IPv4", 00:23:46.603 "ctrlr_loss_timeout_sec": 0, 00:23:46.603 "ddgst": false, 00:23:46.603 "fast_io_fail_timeout_sec": 0, 00:23:46.603 "hdgst": false, 00:23:46.603 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.603 "name": "nvme0", 00:23:46.603 "prchk_guard": false, 00:23:46.603 "prchk_reftag": false, 00:23:46.603 "psk": "key0", 00:23:46.603 "reconnect_delay_sec": 0, 00:23:46.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.603 "traddr": "127.0.0.1", 00:23:46.603 "trsvcid": "4420", 00:23:46.603 "trtype": "TCP" 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "bdev_nvme_set_hotplug", 00:23:46.603 "params": { 00:23:46.603 "enable": false, 00:23:46.603 "period_us": 100000 00:23:46.603 } 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "method": "bdev_wait_for_examine" 00:23:46.603 } 00:23:46.603 ] 00:23:46.603 }, 00:23:46.603 { 00:23:46.603 "subsystem": "nbd", 00:23:46.603 "config": [] 00:23:46.603 } 
00:23:46.603 ] 00:23:46.603 }' 00:23:46.603 20:40:07 keyring_file -- keyring/file.sh@114 -- # killprocess 99858 00:23:46.603 20:40:07 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99858 ']' 00:23:46.603 20:40:07 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99858 00:23:46.603 20:40:07 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:46.603 20:40:07 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.604 20:40:07 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99858 00:23:46.604 20:40:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:46.604 20:40:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:46.604 20:40:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99858' 00:23:46.604 killing process with pid 99858 00:23:46.604 20:40:08 keyring_file -- common/autotest_common.sh@967 -- # kill 99858 00:23:46.604 Received shutdown signal, test time was about 1.000000 seconds 00:23:46.604 00:23:46.604 Latency(us) 00:23:46.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.604 =================================================================================================================== 00:23:46.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:46.604 20:40:08 keyring_file -- common/autotest_common.sh@972 -- # wait 99858 00:23:46.862 20:40:08 keyring_file -- keyring/file.sh@117 -- # bperfpid=100341 00:23:46.862 20:40:08 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100341 /var/tmp/bperf.sock 00:23:46.862 20:40:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 100341 ']' 00:23:46.862 20:40:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:46.862 20:40:08 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:23:46.862 20:40:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:46.862 20:40:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:46.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
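The restart traced above replays configuration instead of re-issuing RPCs: after the first bdevperf instance is killed, the JSON captured from it with save_config is echoed into the second instance through process substitution, which is where the -c /dev/fd/63 in the traced command line comes from. A minimal sketch of that hand-off:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

    config=$("$rpc" -s /var/tmp/bperf.sock save_config)   # keys, sock options, bdev_nvme controller, ...
    # <(...) appears as /dev/fd/63 in the traced command line.
    "$bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")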
00:23:46.862 20:40:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:46.862 20:40:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:46.862 20:40:08 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:23:46.862 "subsystems": [ 00:23:46.862 { 00:23:46.862 "subsystem": "keyring", 00:23:46.862 "config": [ 00:23:46.862 { 00:23:46.862 "method": "keyring_file_add_key", 00:23:46.862 "params": { 00:23:46.862 "name": "key0", 00:23:46.862 "path": "/tmp/tmp.1HW2klfFYX" 00:23:46.862 } 00:23:46.862 }, 00:23:46.863 { 00:23:46.863 "method": "keyring_file_add_key", 00:23:46.863 "params": { 00:23:46.863 "name": "key1", 00:23:46.863 "path": "/tmp/tmp.1N9sdP5hvZ" 00:23:46.863 } 00:23:46.863 } 00:23:46.863 ] 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "subsystem": "iobuf", 00:23:46.863 "config": [ 00:23:46.863 { 00:23:46.863 "method": "iobuf_set_options", 00:23:46.863 "params": { 00:23:46.863 "large_bufsize": 135168, 00:23:46.863 "large_pool_count": 1024, 00:23:46.863 "small_bufsize": 8192, 00:23:46.863 "small_pool_count": 8192 00:23:46.863 } 00:23:46.863 } 00:23:46.863 ] 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "subsystem": "sock", 00:23:46.863 "config": [ 00:23:46.863 { 00:23:46.863 "method": "sock_set_default_impl", 00:23:46.863 "params": { 00:23:46.863 "impl_name": "posix" 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "sock_impl_set_options", 00:23:46.863 "params": { 00:23:46.863 "enable_ktls": false, 00:23:46.863 "enable_placement_id": 0, 00:23:46.863 "enable_quickack": false, 00:23:46.863 "enable_recv_pipe": true, 00:23:46.863 "enable_zerocopy_send_client": false, 00:23:46.863 "enable_zerocopy_send_server": true, 00:23:46.863 "impl_name": "ssl", 00:23:46.863 "recv_buf_size": 4096, 00:23:46.863 "send_buf_size": 4096, 00:23:46.863 "tls_version": 0, 00:23:46.863 "zerocopy_threshold": 0 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "sock_impl_set_options", 00:23:46.863 "params": { 00:23:46.863 "enable_ktls": false, 00:23:46.863 "enable_placement_id": 0, 00:23:46.863 "enable_quickack": false, 00:23:46.863 "enable_recv_pipe": true, 00:23:46.863 "enable_zerocopy_send_client": false, 00:23:46.863 "enable_zerocopy_send_server": true, 00:23:46.863 "impl_name": "posix", 00:23:46.863 "recv_buf_size": 2097152, 00:23:46.863 "send_buf_size": 2097152, 00:23:46.863 "tls_version": 0, 00:23:46.863 "zerocopy_threshold": 0 00:23:46.863 } 00:23:46.863 } 00:23:46.863 ] 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "subsystem": "vmd", 00:23:46.863 "config": [] 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "subsystem": "accel", 00:23:46.863 "config": [ 00:23:46.863 { 00:23:46.863 "method": "accel_set_options", 00:23:46.863 "params": { 00:23:46.863 "buf_count": 2048, 00:23:46.863 "large_cache_size": 16, 00:23:46.863 "sequence_count": 2048, 00:23:46.863 "small_cache_size": 128, 00:23:46.863 "task_count": 2048 00:23:46.863 } 00:23:46.863 } 00:23:46.863 ] 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "subsystem": "bdev", 00:23:46.863 "config": [ 00:23:46.863 { 00:23:46.863 "method": "bdev_set_options", 00:23:46.863 "params": { 00:23:46.863 "bdev_auto_examine": true, 00:23:46.863 "bdev_io_cache_size": 256, 00:23:46.863 "bdev_io_pool_size": 65535, 00:23:46.863 "iobuf_large_cache_size": 16, 00:23:46.863 "iobuf_small_cache_size": 128 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "bdev_raid_set_options", 00:23:46.863 "params": { 00:23:46.863 "process_window_size_kb": 1024 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 
"method": "bdev_iscsi_set_options", 00:23:46.863 "params": { 00:23:46.863 "timeout_sec": 30 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "bdev_nvme_set_options", 00:23:46.863 "params": { 00:23:46.863 "action_on_timeout": "none", 00:23:46.863 "allow_accel_sequence": false, 00:23:46.863 "arbitration_burst": 0, 00:23:46.863 "bdev_retry_count": 3, 00:23:46.863 "ctrlr_loss_timeout_sec": 0, 00:23:46.863 "delay_cmd_submit": true, 00:23:46.863 "dhchap_dhgroups": [ 00:23:46.863 "null", 00:23:46.863 "ffdhe2048", 00:23:46.863 "ffdhe3072", 00:23:46.863 "ffdhe4096", 00:23:46.863 "ffdhe6144", 00:23:46.863 "ffdhe8192" 00:23:46.863 ], 00:23:46.863 "dhchap_digests": [ 00:23:46.863 "sha256", 00:23:46.863 "sha384", 00:23:46.863 "sha512" 00:23:46.863 ], 00:23:46.863 "disable_auto_failback": false, 00:23:46.863 "fast_io_fail_timeout_sec": 0, 00:23:46.863 "generate_uuids": false, 00:23:46.863 "high_priority_weight": 0, 00:23:46.863 "io_path_stat": false, 00:23:46.863 "io_queue_requests": 512, 00:23:46.863 "keep_alive_timeout_ms": 10000, 00:23:46.863 "low_priority_weight": 0, 00:23:46.863 "medium_priority_weight": 0, 00:23:46.863 "nvme_adminq_poll_period_us": 10000, 00:23:46.863 "nvme_error_stat": false, 00:23:46.863 "nvme_ioq_poll_period_us": 0, 00:23:46.863 "rdma_cm_event_timeout_ms": 0, 00:23:46.863 "rdma_max_cq_size": 0, 00:23:46.863 "rdma_srq_size": 0, 00:23:46.863 "reconnect_delay_sec": 0, 00:23:46.863 "timeout_admin_us": 0, 00:23:46.863 "timeout_us": 0, 00:23:46.863 "transport_ack_timeout": 0, 00:23:46.863 "transport_retry_count": 4, 00:23:46.863 "transport_tos": 0 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "bdev_nvme_attach_controller", 00:23:46.863 "params": { 00:23:46.863 "adrfam": "IPv4", 00:23:46.863 "ctrlr_loss_timeout_sec": 0, 00:23:46.863 "ddgst": false, 00:23:46.863 "fast_io_fail_timeout_sec": 0, 00:23:46.863 "hdgst": false, 00:23:46.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.863 "name": "nvme0", 00:23:46.863 "prchk_guard": false, 00:23:46.863 "prchk_reftag": false, 00:23:46.863 "psk": "key0", 00:23:46.863 "reconnect_delay_sec": 0, 00:23:46.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.863 "traddr": "127.0.0.1", 00:23:46.863 "trsvcid": "4420", 00:23:46.863 "trtype": "TCP" 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "bdev_nvme_set_hotplug", 00:23:46.863 "params": { 00:23:46.863 "enable": false, 00:23:46.863 "period_us": 100000 00:23:46.863 } 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "method": "bdev_wait_for_examine" 00:23:46.863 } 00:23:46.863 ] 00:23:46.863 }, 00:23:46.863 { 00:23:46.863 "subsystem": "nbd", 00:23:46.863 "config": [] 00:23:46.863 } 00:23:46.863 ] 00:23:46.863 }' 00:23:46.863 [2024-07-15 20:40:08.232724] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:23:46.863 [2024-07-15 20:40:08.232909] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100341 ] 00:23:47.122 [2024-07-15 20:40:08.370562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.122 [2024-07-15 20:40:08.459519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.122 [2024-07-15 20:40:08.611353] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:48.055 20:40:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:48.055 20:40:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:23:48.056 20:40:09 keyring_file -- keyring/file.sh@120 -- # jq length 00:23:48.056 20:40:09 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:23:48.056 20:40:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.314 20:40:09 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:23:48.314 20:40:09 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:23:48.314 20:40:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:23:48.314 20:40:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:48.314 20:40:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:48.314 20:40:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.314 20:40:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:23:48.572 20:40:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:23:48.572 20:40:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:23:48.572 20:40:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:23:48.572 20:40:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:23:48.572 20:40:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:48.572 20:40:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:48.572 20:40:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:23:48.830 20:40:10 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:23:48.830 20:40:10 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:23:48.830 20:40:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:23:48.830 20:40:10 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:23:49.088 20:40:10 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:23:49.088 20:40:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:23:49.088 20:40:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1HW2klfFYX /tmp/tmp.1N9sdP5hvZ 00:23:49.088 20:40:10 keyring_file -- keyring/file.sh@20 -- # killprocess 100341 00:23:49.088 20:40:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 100341 ']' 00:23:49.088 20:40:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 100341 00:23:49.088 20:40:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:49.088 20:40:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:23:49.088 20:40:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100341 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:49.346 killing process with pid 100341 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100341' 00:23:49.346 Received shutdown signal, test time was about 1.000000 seconds 00:23:49.346 00:23:49.346 Latency(us) 00:23:49.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.346 =================================================================================================================== 00:23:49.346 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@967 -- # kill 100341 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@972 -- # wait 100341 00:23:49.346 20:40:10 keyring_file -- keyring/file.sh@21 -- # killprocess 99837 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 99837 ']' 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@952 -- # kill -0 99837 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@953 -- # uname 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 99837 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.346 killing process with pid 99837 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 99837' 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@967 -- # kill 99837 00:23:49.346 [2024-07-15 20:40:10.786221] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:49.346 20:40:10 keyring_file -- common/autotest_common.sh@972 -- # wait 99837 00:23:49.606 00:23:49.606 real 0m17.074s 00:23:49.606 user 0m44.557s 00:23:49.606 sys 0m3.184s 00:23:49.606 20:40:11 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.606 20:40:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:23:49.606 ************************************ 00:23:49.606 END TEST keyring_file 00:23:49.606 ************************************ 00:23:49.606 20:40:11 -- common/autotest_common.sh@1142 -- # return 0 00:23:49.606 20:40:11 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:23:49.606 20:40:11 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:49.606 20:40:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:49.606 20:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.606 20:40:11 -- common/autotest_common.sh@10 -- # set +x 00:23:49.606 ************************************ 00:23:49.606 START TEST keyring_linux 00:23:49.606 ************************************ 00:23:49.867 20:40:11 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:23:49.867 * Looking for test storage... 
00:23:49.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ec49175a-6012-419b-81e2-f6fecd100da5 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=ec49175a-6012-419b-81e2-f6fecd100da5 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:49.867 20:40:11 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.867 20:40:11 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.867 20:40:11 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.867 20:40:11 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.867 20:40:11 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.867 20:40:11 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.867 20:40:11 keyring_linux -- paths/export.sh@5 -- # export PATH 00:23:49.867 20:40:11 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:49.867 20:40:11 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:23:49.867 /tmp/:spdk-test:key0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:23:49.867 20:40:11 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:23:49.867 20:40:11 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:23:49.867 20:40:11 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:23:49.868 20:40:11 keyring_linux -- nvmf/common.sh@705 -- # python - 00:23:49.868 20:40:11 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:23:49.868 /tmp/:spdk-test:key1 00:23:49.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.868 20:40:11 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:23:49.868 20:40:11 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100490 00:23:49.868 20:40:11 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:49.868 20:40:11 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100490 00:23:49.868 20:40:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100490 ']' 00:23:49.868 20:40:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.868 20:40:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.868 20:40:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.868 20:40:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.868 20:40:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:50.126 [2024-07-15 20:40:11.395398] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:23:50.126 [2024-07-15 20:40:11.395757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100490 ] 00:23:50.126 [2024-07-15 20:40:11.534179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.126 [2024-07-15 20:40:11.615030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:51.062 20:40:12 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:51.062 [2024-07-15 20:40:12.452441] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.062 null0 00:23:51.062 [2024-07-15 20:40:12.484413] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:51.062 [2024-07-15 20:40:12.484904] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.062 20:40:12 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:23:51.062 400659297 00:23:51.062 20:40:12 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:23:51.062 792036954 00:23:51.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:51.062 20:40:12 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100532 00:23:51.062 20:40:12 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:23:51.062 20:40:12 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100532 /var/tmp/bperf.sock 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 100532 ']' 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.062 20:40:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:51.320 [2024-07-15 20:40:12.562330] Starting SPDK v24.09-pre git sha1 f8598a71f / DPDK 24.03.0 initialization... 
00:23:51.320 [2024-07-15 20:40:12.562422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100532 ] 00:23:51.320 [2024-07-15 20:40:12.695114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.320 [2024-07-15 20:40:12.755105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.252 20:40:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.252 20:40:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:23:52.252 20:40:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:23:52.252 20:40:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:23:52.510 20:40:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:23:52.510 20:40:13 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:52.768 20:40:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:52.768 20:40:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:23:53.025 [2024-07-15 20:40:14.390534] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:53.025 nvme0n1 00:23:53.025 20:40:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:23:53.025 20:40:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:23:53.025 20:40:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:53.025 20:40:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:53.026 20:40:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:53.026 20:40:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:53.284 20:40:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:23:53.284 20:40:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:53.284 20:40:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:23:53.284 20:40:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:23:53.284 20:40:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:23:53.284 20:40:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:23:53.284 20:40:14 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:53.850 20:40:15 keyring_linux -- keyring/linux.sh@25 -- # sn=400659297 00:23:53.850 20:40:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:23:53.850 20:40:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:53.850 20:40:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 400659297 == \4\0\0\6\5\9\2\9\7 ]] 00:23:53.850 20:40:15 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 400659297 00:23:53.850 20:40:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:23:53.850 20:40:15 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:53.850 Running I/O for 1 seconds... 00:23:54.783 00:23:54.783 Latency(us) 00:23:54.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.783 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:54.783 nvme0n1 : 1.01 9212.57 35.99 0.00 0.00 13800.85 8221.79 18350.08 00:23:54.783 =================================================================================================================== 00:23:54.783 Total : 9212.57 35.99 0.00 0.00 13800.85 8221.79 18350.08 00:23:54.783 0 00:23:54.783 20:40:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:23:54.783 20:40:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:23:55.349 20:40:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:23:55.349 20:40:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:23:55.349 20:40:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:23:55.349 20:40:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:23:55.349 20:40:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:23:55.349 20:40:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:23:55.607 20:40:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:23:55.607 20:40:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:23:55.607 20:40:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:23:55.607 20:40:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:55.607 20:40:16 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:23:55.607 20:40:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:23:55.864 [2024-07-15 20:40:17.320433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:55.864 [2024-07-15 20:40:17.320552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x675ea0 (107): Transport endpoint is not connected 00:23:55.864 [2024-07-15 20:40:17.321539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x675ea0 (9): Bad file descriptor 00:23:55.864 [2024-07-15 20:40:17.322535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:55.864 [2024-07-15 20:40:17.322557] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:23:55.864 [2024-07-15 20:40:17.322568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:55.864 2024/07/15 20:40:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:55.865 request: 00:23:55.865 { 00:23:55.865 "method": "bdev_nvme_attach_controller", 00:23:55.865 "params": { 00:23:55.865 "name": "nvme0", 00:23:55.865 "trtype": "tcp", 00:23:55.865 "traddr": "127.0.0.1", 00:23:55.865 "adrfam": "ipv4", 00:23:55.865 "trsvcid": "4420", 00:23:55.865 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:55.865 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:55.865 "prchk_reftag": false, 00:23:55.865 "prchk_guard": false, 00:23:55.865 "hdgst": false, 00:23:55.865 "ddgst": false, 00:23:55.865 "psk": ":spdk-test:key1" 00:23:55.865 } 00:23:55.865 } 00:23:55.865 Got JSON-RPC error response 00:23:55.865 GoRPCClient: error on JSON-RPC call 00:23:55.865 20:40:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:23:55.865 20:40:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:55.865 20:40:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:55.865 20:40:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@33 -- # sn=400659297 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 400659297 00:23:55.865 1 links removed 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@33 -- # sn=792036954 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 792036954 00:23:55.865 1 links removed 00:23:55.865 20:40:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100532 00:23:55.865 20:40:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100532 ']' 00:23:55.865 20:40:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100532 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100532 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.138 killing process with pid 100532 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100532' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 100532 00:23:56.138 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.138 00:23:56.138 Latency(us) 00:23:56.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.138 =================================================================================================================== 00:23:56.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 100532 00:23:56.138 20:40:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100490 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 100490 ']' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 100490 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 100490 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.138 killing process with pid 100490 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 100490' 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 100490 00:23:56.138 20:40:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 100490 00:23:56.398 ************************************ 00:23:56.398 END TEST keyring_linux 00:23:56.398 ************************************ 00:23:56.398 00:23:56.398 real 0m6.731s 00:23:56.398 user 0m13.853s 00:23:56.398 sys 0m1.437s 00:23:56.398 20:40:17 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.398 20:40:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:23:56.398 20:40:17 -- common/autotest_common.sh@1142 -- # return 0 00:23:56.398 20:40:17 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 
']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:23:56.398 20:40:17 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:23:56.398 20:40:17 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:23:56.398 20:40:17 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:23:56.398 20:40:17 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:23:56.398 20:40:17 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:23:56.398 20:40:17 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:23:56.398 20:40:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.398 20:40:17 -- common/autotest_common.sh@10 -- # set +x 00:23:56.398 20:40:17 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:23:56.398 20:40:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:23:56.398 20:40:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:23:56.398 20:40:17 -- common/autotest_common.sh@10 -- # set +x 00:23:57.771 INFO: APP EXITING 00:23:57.771 INFO: killing all VMs 00:23:57.771 INFO: killing vhost app 00:23:57.771 INFO: EXIT DONE 00:23:58.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:58.704 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:58.704 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:59.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:59.268 Cleaning 00:23:59.268 Removing: /var/run/dpdk/spdk0/config 00:23:59.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:23:59.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:23:59.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:23:59.268 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:23:59.268 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:23:59.268 Removing: /var/run/dpdk/spdk0/hugepage_info 00:23:59.268 Removing: /var/run/dpdk/spdk1/config 00:23:59.268 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:23:59.268 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:23:59.268 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:23:59.268 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:23:59.268 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:23:59.268 Removing: /var/run/dpdk/spdk1/hugepage_info 00:23:59.268 Removing: /var/run/dpdk/spdk2/config 00:23:59.268 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:23:59.268 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:23:59.268 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:23:59.268 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:23:59.268 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:23:59.268 Removing: /var/run/dpdk/spdk2/hugepage_info 00:23:59.268 Removing: /var/run/dpdk/spdk3/config 00:23:59.268 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:23:59.268 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:23:59.268 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:23:59.268 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:23:59.268 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:23:59.268 Removing: /var/run/dpdk/spdk3/hugepage_info 
00:23:59.268 Removing: /var/run/dpdk/spdk4/config 00:23:59.268 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:23:59.268 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:23:59.268 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:23:59.268 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:23:59.268 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:23:59.268 Removing: /var/run/dpdk/spdk4/hugepage_info 00:23:59.268 Removing: /dev/shm/nvmf_trace.0 00:23:59.268 Removing: /dev/shm/spdk_tgt_trace.pid60682 00:23:59.268 Removing: /var/run/dpdk/spdk0 00:23:59.268 Removing: /var/run/dpdk/spdk1 00:23:59.268 Removing: /var/run/dpdk/spdk2 00:23:59.268 Removing: /var/run/dpdk/spdk3 00:23:59.268 Removing: /var/run/dpdk/spdk4 00:23:59.268 Removing: /var/run/dpdk/spdk_pid100341 00:23:59.268 Removing: /var/run/dpdk/spdk_pid100490 00:23:59.268 Removing: /var/run/dpdk/spdk_pid100532 00:23:59.268 Removing: /var/run/dpdk/spdk_pid60542 00:23:59.268 Removing: /var/run/dpdk/spdk_pid60682 00:23:59.268 Removing: /var/run/dpdk/spdk_pid60924 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61022 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61056 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61165 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61196 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61314 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61594 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61764 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61846 00:23:59.268 Removing: /var/run/dpdk/spdk_pid61919 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62014 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62047 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62083 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62144 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62243 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62859 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62918 00:23:59.268 Removing: /var/run/dpdk/spdk_pid62987 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63001 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63075 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63095 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63163 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63183 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63229 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63250 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63297 00:23:59.268 Removing: /var/run/dpdk/spdk_pid63326 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63468 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63504 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63578 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63629 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63653 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63712 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63746 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63781 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63814 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63844 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63879 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63913 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63950 00:23:59.525 Removing: /var/run/dpdk/spdk_pid63979 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64019 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64048 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64077 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64118 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64148 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64177 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64217 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64246 00:23:59.525 Removing: 
/var/run/dpdk/spdk_pid64288 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64321 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64356 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64392 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64457 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64563 00:23:59.525 Removing: /var/run/dpdk/spdk_pid64969 00:23:59.525 Removing: /var/run/dpdk/spdk_pid68265 00:23:59.525 Removing: /var/run/dpdk/spdk_pid68596 00:23:59.525 Removing: /var/run/dpdk/spdk_pid71012 00:23:59.525 Removing: /var/run/dpdk/spdk_pid71370 00:23:59.525 Removing: /var/run/dpdk/spdk_pid71614 00:23:59.525 Removing: /var/run/dpdk/spdk_pid71660 00:23:59.525 Removing: /var/run/dpdk/spdk_pid72284 00:23:59.525 Removing: /var/run/dpdk/spdk_pid72725 00:23:59.525 Removing: /var/run/dpdk/spdk_pid72767 00:23:59.525 Removing: /var/run/dpdk/spdk_pid73120 00:23:59.525 Removing: /var/run/dpdk/spdk_pid73652 00:23:59.525 Removing: /var/run/dpdk/spdk_pid74095 00:23:59.525 Removing: /var/run/dpdk/spdk_pid75010 00:23:59.525 Removing: /var/run/dpdk/spdk_pid75959 00:23:59.525 Removing: /var/run/dpdk/spdk_pid76081 00:23:59.525 Removing: /var/run/dpdk/spdk_pid76143 00:23:59.525 Removing: /var/run/dpdk/spdk_pid77603 00:23:59.525 Removing: /var/run/dpdk/spdk_pid77808 00:23:59.525 Removing: /var/run/dpdk/spdk_pid83217 00:23:59.525 Removing: /var/run/dpdk/spdk_pid83659 00:23:59.525 Removing: /var/run/dpdk/spdk_pid83768 00:23:59.525 Removing: /var/run/dpdk/spdk_pid83914 00:23:59.525 Removing: /var/run/dpdk/spdk_pid83945 00:23:59.525 Removing: /var/run/dpdk/spdk_pid83973 00:23:59.525 Removing: /var/run/dpdk/spdk_pid84005 00:23:59.525 Removing: /var/run/dpdk/spdk_pid84151 00:23:59.525 Removing: /var/run/dpdk/spdk_pid84303 00:23:59.525 Removing: /var/run/dpdk/spdk_pid84559 00:23:59.525 Removing: /var/run/dpdk/spdk_pid84687 00:23:59.525 Removing: /var/run/dpdk/spdk_pid84932 00:23:59.525 Removing: /var/run/dpdk/spdk_pid85048 00:23:59.525 Removing: /var/run/dpdk/spdk_pid85186 00:23:59.525 Removing: /var/run/dpdk/spdk_pid85521 00:23:59.525 Removing: /var/run/dpdk/spdk_pid85948 00:23:59.525 Removing: /var/run/dpdk/spdk_pid86223 00:23:59.525 Removing: /var/run/dpdk/spdk_pid86713 00:23:59.525 Removing: /var/run/dpdk/spdk_pid86720 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87050 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87064 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87078 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87109 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87119 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87471 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87520 00:23:59.525 Removing: /var/run/dpdk/spdk_pid87847 00:23:59.525 Removing: /var/run/dpdk/spdk_pid88080 00:23:59.525 Removing: /var/run/dpdk/spdk_pid88547 00:23:59.525 Removing: /var/run/dpdk/spdk_pid89135 00:23:59.525 Removing: /var/run/dpdk/spdk_pid90515 00:23:59.525 Removing: /var/run/dpdk/spdk_pid91093 00:23:59.525 Removing: /var/run/dpdk/spdk_pid91099 00:23:59.525 Removing: /var/run/dpdk/spdk_pid93045 00:23:59.525 Removing: /var/run/dpdk/spdk_pid93122 00:23:59.525 Removing: /var/run/dpdk/spdk_pid93212 00:23:59.526 Removing: /var/run/dpdk/spdk_pid93298 00:23:59.526 Removing: /var/run/dpdk/spdk_pid93442 00:23:59.526 Removing: /var/run/dpdk/spdk_pid93537 00:23:59.526 Removing: /var/run/dpdk/spdk_pid93623 00:23:59.526 Removing: /var/run/dpdk/spdk_pid93713 00:23:59.526 Removing: /var/run/dpdk/spdk_pid94066 00:23:59.526 Removing: /var/run/dpdk/spdk_pid94724 00:23:59.526 Removing: /var/run/dpdk/spdk_pid96068 00:23:59.526 Removing: /var/run/dpdk/spdk_pid96259 
00:23:59.526 Removing: /var/run/dpdk/spdk_pid96541 00:23:59.526 Removing: /var/run/dpdk/spdk_pid96834 00:23:59.526 Removing: /var/run/dpdk/spdk_pid97361 00:23:59.526 Removing: /var/run/dpdk/spdk_pid97366 00:23:59.526 Removing: /var/run/dpdk/spdk_pid97710 00:23:59.526 Removing: /var/run/dpdk/spdk_pid97865 00:23:59.526 Removing: /var/run/dpdk/spdk_pid98021 00:23:59.526 Removing: /var/run/dpdk/spdk_pid98114 00:23:59.526 Removing: /var/run/dpdk/spdk_pid98264 00:23:59.526 Removing: /var/run/dpdk/spdk_pid98371 00:23:59.526 Removing: /var/run/dpdk/spdk_pid99034 00:23:59.526 Removing: /var/run/dpdk/spdk_pid99067 00:23:59.526 Removing: /var/run/dpdk/spdk_pid99100 00:23:59.526 Removing: /var/run/dpdk/spdk_pid99354 00:23:59.526 Removing: /var/run/dpdk/spdk_pid99386 00:23:59.526 Removing: /var/run/dpdk/spdk_pid99416 00:23:59.783 Removing: /var/run/dpdk/spdk_pid99837 00:23:59.783 Removing: /var/run/dpdk/spdk_pid99858 00:23:59.783 Clean 00:23:59.783 20:40:21 -- common/autotest_common.sh@1451 -- # return 0 00:23:59.783 20:40:21 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:23:59.783 20:40:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.783 20:40:21 -- common/autotest_common.sh@10 -- # set +x 00:23:59.783 20:40:21 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:23:59.783 20:40:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.783 20:40:21 -- common/autotest_common.sh@10 -- # set +x 00:23:59.783 20:40:21 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:59.783 20:40:21 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:23:59.783 20:40:21 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:23:59.783 20:40:21 -- spdk/autotest.sh@391 -- # hash lcov 00:23:59.783 20:40:21 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:23:59.783 20:40:21 -- spdk/autotest.sh@393 -- # hostname 00:23:59.783 20:40:21 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:00.040 geninfo: WARNING: invalid characters removed from testname! 
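For readers following the coverage post-processing performed in the next few entries, the lcov calls reduce to the short sequence sketched here. This is a condensed illustration only: the real autotest.sh invocations also carry the --rc branch/function-coverage switches and --no-external seen in the trace, and cov_base.info is the baseline capture taken earlier in the run.

    # Condensed sketch of the lcov aggregation done around this point (flags trimmed):
    cd /home/vagrant/spdk_repo/spdk
    OUT=../output
    lcov -q -c -d . -t "$(hostname)" -o "$OUT/cov_test.info"                          # capture counters from this test run
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"  # merge with the pre-test baseline
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"              # drop bundled DPDK sources
    lcov -q -r "$OUT/cov_total.info" '/usr/*'   -o "$OUT/cov_total.info"              # drop system headers
    lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"      # drop example/tool code (spdk_lspci and spdk_top are filtered the same way below)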
00:24:32.220 20:40:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:32.781 20:40:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:36.060 20:40:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:38.583 20:40:59 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:41.107 20:41:02 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:43.631 20:41:05 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:46.916 20:41:07 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:24:46.916 20:41:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:46.916 20:41:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:24:46.916 20:41:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.916 20:41:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.916 20:41:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.916 20:41:07 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.916 20:41:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.916 20:41:07 -- paths/export.sh@5 -- $ export PATH 00:24:46.916 20:41:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.916 20:41:07 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:24:46.916 20:41:07 -- common/autobuild_common.sh@444 -- $ date +%s 00:24:46.916 20:41:07 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721076067.XXXXXX 00:24:46.916 20:41:07 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721076067.bdHD2f 00:24:46.916 20:41:07 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:24:46.916 20:41:07 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:24:46.916 20:41:07 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:24:46.916 20:41:07 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:24:46.916 20:41:07 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:24:46.916 20:41:07 -- common/autobuild_common.sh@460 -- $ get_config_params 00:24:46.916 20:41:07 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:24:46.916 20:41:07 -- common/autotest_common.sh@10 -- $ set +x 00:24:46.916 20:41:07 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:24:46.916 20:41:07 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:24:46.916 20:41:07 -- pm/common@17 -- $ local monitor 00:24:46.916 20:41:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:46.916 20:41:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:46.916 20:41:07 -- pm/common@25 -- $ sleep 1 00:24:46.916 20:41:07 -- pm/common@21 -- $ date +%s 00:24:46.916 20:41:07 -- pm/common@21 -- $ date +%s 00:24:46.916 20:41:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721076067 00:24:46.916 20:41:07 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721076067 00:24:46.916 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721076067_collect-vmstat.pm.log 00:24:46.916 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721076067_collect-cpu-load.pm.log 00:24:47.483 20:41:08 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:24:47.483 20:41:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:24:47.483 20:41:08 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:24:47.483 20:41:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:24:47.483 20:41:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:24:47.483 20:41:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:24:47.483 20:41:08 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:24:47.483 20:41:08 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:24:47.483 20:41:08 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:47.741 20:41:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:24:47.742 20:41:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:24:47.742 20:41:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:24:47.742 20:41:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:24:47.742 20:41:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:47.742 20:41:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:24:47.742 20:41:08 -- pm/common@44 -- $ pid=102256 00:24:47.742 20:41:08 -- pm/common@50 -- $ kill -TERM 102256 00:24:47.742 20:41:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:24:47.742 20:41:08 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:24:47.742 20:41:08 -- pm/common@44 -- $ pid=102258 00:24:47.742 20:41:08 -- pm/common@50 -- $ kill -TERM 102258 00:24:47.742 + [[ -n 5154 ]] 00:24:47.742 + sudo kill 5154 00:24:47.751 [Pipeline] } 00:24:47.772 [Pipeline] // timeout 00:24:47.778 [Pipeline] } 00:24:47.796 [Pipeline] // stage 00:24:47.802 [Pipeline] } 00:24:47.820 [Pipeline] // catchError 00:24:47.830 [Pipeline] stage 00:24:47.832 [Pipeline] { (Stop VM) 00:24:47.847 [Pipeline] sh 00:24:48.125 + vagrant halt 00:24:52.310 ==> default: Halting domain... 00:24:57.585 [Pipeline] sh 00:24:57.863 + vagrant destroy -f 00:25:02.100 ==> default: Removing domain... 
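As an aside on the monitor cleanup traced just above: autopackage starts the power/load collectors at the beginning of the packaging step and signals them again at the end. The sketch below shows that pattern under the assumption that each collector records its own PID file in the shared power directory; only the start arguments and the final kill -TERM of those PIDs are actually visible in this trace, so treat the rest as illustrative.

    # Rough sketch of the resource-monitor start/stop pattern (illustrative only):
    POWER_DIR=/home/vagrant/spdk_repo/spdk/../output/power
    TS=$(date +%s)
    # start: one collector per resource, logging into $POWER_DIR
    scripts/perf/pm/collect-cpu-load -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$TS" &
    scripts/perf/pm/collect-vmstat   -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$TS" &

    # stop: terminate whichever collectors left a PID file behind
    for pidfile in "$POWER_DIR"/collect-cpu-load.pid "$POWER_DIR"/collect-vmstat.pid; do
        [[ -e "$pidfile" ]] && kill -TERM "$(cat "$pidfile")"
    done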
00:25:02.113 [Pipeline] sh 00:25:02.389 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:25:02.400 [Pipeline] } 00:25:02.420 [Pipeline] // stage 00:25:02.427 [Pipeline] } 00:25:02.445 [Pipeline] // dir 00:25:02.450 [Pipeline] } 00:25:02.463 [Pipeline] // wrap 00:25:02.470 [Pipeline] } 00:25:02.510 [Pipeline] // catchError 00:25:02.545 [Pipeline] stage 00:25:02.547 [Pipeline] { (Epilogue) 00:25:02.556 [Pipeline] sh 00:25:02.830 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:09.408 [Pipeline] catchError 00:25:09.411 [Pipeline] { 00:25:09.426 [Pipeline] sh 00:25:09.706 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:09.963 Artifacts sizes are good 00:25:09.971 [Pipeline] } 00:25:09.988 [Pipeline] // catchError 00:25:10.000 [Pipeline] archiveArtifacts 00:25:10.008 Archiving artifacts 00:25:10.184 [Pipeline] cleanWs 00:25:10.197 [WS-CLEANUP] Deleting project workspace... 00:25:10.197 [WS-CLEANUP] Deferred wipeout is used... 00:25:10.203 [WS-CLEANUP] done 00:25:10.205 [Pipeline] } 00:25:10.224 [Pipeline] // stage 00:25:10.230 [Pipeline] } 00:25:10.247 [Pipeline] // node 00:25:10.253 [Pipeline] End of Pipeline 00:25:10.284 Finished: SUCCESS
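For anyone reconstructing the keyring_linux scenario exercised earlier in this log, its happy path reduces to roughly the commands below. Key names, NQNs, PSK payloads and RPC flags are copied from the trace; the /var/tmp/bperf.sock socket is the one exposed by the bdevperf instance started with --wait-for-rpc, and an SPDK target is assumed to be listening on 127.0.0.1:4420 with the matching PSK, as set up earlier in the trace. The bperf_rpc helper name is a local convenience mirroring the test's bperf_cmd; error handling and the negative key1 case are omitted, so this is a simplified outline of test/keyring/linux.sh rather than the script itself.

    # Simplified outline of the kernel-keyring PSK flow (names/keys taken from the trace above):
    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

    # 1. Place the interchange-format TLS PSK into the session keyring
    keyctl add user ":spdk-test:key0" "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s

    # 2. Let the bdevperf app resolve PSKs through the kernel keyring, then finish init
    bperf_rpc keyring_linux_set_options --enable
    bperf_rpc framework_start_init

    # 3. Attach the NVMe/TCP controller, referring to the key by its keyring name
    bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

    # 4. Tear down: detach the controller and unlink the key from the session keyring
    bperf_rpc bdev_nvme_detach_controller nvme0
    keyctl unlink "$(keyctl search @s user :spdk-test:key0)"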